Articles

“Who’s in Charge of Your Business: The Humans or the Machines?”

DataScava commissioned an “Executive Q&A: DataScava, AI and ML” and a series of articles from Scott Spangler, former IBM Watson Health Researcher and Chief Data Scientist, and author of the book “Mining the Talk: Unlocking the Business Value in Unstructured Information.” Scott discusses how and why DataScava’s patented, precise approach to mining unstructured text data perfectly complements real-world big data applications in AI, ML, RPA, BI, Research, Operations, Talent, and more. In addition, he contrasts our proprietary Domain-Specific Language Processing (DSLP), Weighted Topic Scoring (WTS), and Tailored Topics Taxonomies (TTT) with standard methods such as Natural Language Processing (NLP) and Natural Language Understanding (NLU).

Click to view:

“Who’s in Charge of Your Business: The Humans or the Machines?”

“Executive Q&A: DataScava, AI and ML”

“DataScava and Business Intelligence”

“DataScava and RPA”

In this first article in the series, Scott discusses:

  • The pitfalls of using a fully automated approach to critical decision-making.
  • The desirability of having a parallel human-machine partnership that regulates and monitors the inputs and outputs of automated approaches.
  • The three basic ingredients that are needed to make that hybrid process successful and how DataScava implements each of these components.

View the published article

Here’s an excerpt:

“Algorithms will be more effective in the long run if they are part of a more holistic framework that includes user-controlled domain-specific ontologies, statistical analysis, and rule-based reasoning strategies. These are the basic ingredients that a tool like DataScava provides.

DataScava . . .

“Is a robot ally in humanity’s struggle for control of how we utilize big data to make decisions. By providing tools for capturing the key underlying topics and rules that govern important business concepts, it levels the playing field so that machine learning no longer has to have the final say on critical business decisions.

Can supervise the process based on human-provided expertise and determine which data to use for training and which to avoid, as well as in which situations to trust deep learning decisions and when to fall back on more rule-based approaches. Such processes put the humans back in charge and allow the machines to serve their intended role as adjuncts and trusted advisors.

In partnership with a trained human mind – can act effectively as a tool for giving the left brain an equal say in big data decision-making tasks.

Can play a leading role in helping businesses manage and maintain their big data more efficiently using information ontologies, statistics with visualization and rule-based approaches.

Perfectly complements existing approaches to unlocking the value of unstructured text data – by helping companies to model the higher-level intents and purposes behind the labeling and classification of data, by capturing the abstract topics and themes that represent their own business and subject matter expertise, and by applying both to big data sets in real time.

Provides a practical, easy-to-use toolset for capturing the critical business ontologies that bridge unstructured text data analysis using standard data science techniques and the human expertise that gives your business its competitive edge.

When a deep learning system and DataScava agree on a classification, that’s ideal, because we then have a plausible explanation for why the deep learning algorithm decided the way it did.

Can help data professionals and businesspeople use machine and human intelligence together to make their messy unstructured text data more accessible, understandable, and actionable.”