As AI systems and unstructured data grow, there is a need for an independent way to curate data, evaluate it, and measure output — one that does not depend on AI's natural-language constructs and that provides a comparative view of how the data is processed. Here's an excerpt from a blog post on this subject, written by our CTO John Harney and published by the big data site KDnuggets:
“For anyone worrying about machines taking over the world, I have reassuring news: The idea of artificial intelligence has been overcome by hype. I don’t mean to belittle AI’s promise or even its existing capabilities. The technology allows organizations to put data to use in ways we could only imagine not that long ago.
“It’s revolutionized the way executives approach strategic planning. But very often lately—when I’m in meetings, reading research papers or listening to an expert’s presentation—I can’t shake the feeling that to many people, terms like “AI,” “machine learning” and “cognitive computing” have become answers unto themselves.
“Today, solutions providers put statements like “AI-driven” or “harnessing the power of machine learning” at the core of their sales pitch. The buzzwords are certainly getting through. One colleague tells the story of a client calling “to make sure AI was included” in their data analysis project. Business people have been sold on the notion that today’s cutting-edge systems analyze data in a black box, then spit out reliable insights. How? They just do.”
Read the full article, "Let's Admit It: We're a Long Way from Using Real 'Intelligence' in AI," on KDnuggets: https://www.kdnuggets.com/2018/