The interlacing between humans and AI
Leverate’s Adinah Brown looks at the commercial development and application of Artificial Intelligence.
The near future seems likely to bring Asimov’s vision into existence. The development of Artificial Intelligence (AI) has begun, with many of the world’s biggest technology companies fielding entire verticals dedicated to AI. Endeavors like Google.AI, Microsoft Research AI and IBM’s Watson are all very much part of their respective companies’ future plans.
Whether it is deep learning, digital reasoning, machine learning, or any other variation of the activity, the reality of a world saturated with AI technology appears to be fast approaching.
Microsoft’s dedicated AI division was announced in late 2016, with a group of 5000 led by Harry Shum. Microsoft had been working on AI for some time as part of its general research development, but this move solidified the company’s commitment to furthering its AI work.
Their website proudly states that “MSR AI pursues use of machine intelligence in new ways to empower people and organizations, including systems that deliver new experiences and capabilities that help people be more efficient, engaged and productive.” These early efforts were focused on infusing AI into their software solutions to improve usability and capabilities.
But recent developments have broadened this approach toward “unravelling the mysteries of human intellect” and using this knowledge to “develop a more general, flexible artificial intelligence”.
Google and IBM also seem to be focusing on a broader approach, indicating that developing AI is a massive task with no clear end in sight. IBM’s chief innovation officer, Bernie Meyerson, discussing the advances of Watson, described it as “just the first step on a very, very long road.”
Health care may seem an unexpected focus in AI discussions, but the appeal becomes obvious when you consider the need to base health care decisions on medical research data. Given the sheer volume of medical research available and the limitations of the human mind, a big data solution has always promised immediate, substantial benefits.
If doctors are reading on average half a dozen research papers a month, a solution like Watson, which can read 500,000 in 15 seconds, is an unfathomable benefit.
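To put those figures in perspective, here is a back-of-the-envelope calculation. It is only a sketch based on the numbers quoted above (roughly six papers a month per doctor, and Watson’s reported 500,000 papers in 15 seconds):

```python
# Back-of-the-envelope comparison of human vs. machine reading throughput.
# Both figures are taken from the article's quoted claims.
PAPERS = 500_000              # papers Watson reportedly reads in 15 seconds
DOCTOR_PAPERS_PER_MONTH = 6   # rough average for a practicing doctor

months = PAPERS / DOCTOR_PAPERS_PER_MONTH
years = months / 12

print(f"A doctor would need roughly {years:,.0f} years "
      f"to read what Watson scans in 15 seconds.")
```

At six papers a month, covering that corpus would take a single doctor on the order of seven thousand years, which is what makes the gap feel unfathomable.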
Currently, the focus of medical AI has been on supporting doctors in their diagnoses. This pattern is mirrored in other industries, where digital reasoning software is augmenting human roles rather than replacing them. Programs like Textio, which specializes in natural language processing, are being used to help companies with the job interviewing process, while Lex Machina is a product that helps lawyers analyze judges and attorneys.
AI’s role remains limited to software solutions that analyze big data because of the current limits of its capabilities. To date, AI solutions are essentially advanced labor-saving tools, much like earlier technological advances.
But the future of AI holds the promise of greater involvement in, and encroachment on, human labor and other endeavors. This future, an offshoot of AI called AGI (Artificial General Intelligence), would have the capacity to produce robots that can effectively undertake any job or activity, a predicament that has sparked feverish discussion about life dominated by AI and the possibility of the last great industrial revolution.
The time of the robots may not have arrived, but the future of AI is certainly the brave new world of a science fiction past.