Striding along Omotesando Street in Harajuku, Tokyo, two summers ago, I came across Pepper, an emotionally intelligent humanoid robot created by Softbank Robotics Holdings Group (SBRH), in one of its more grandiose Softbank Mobile stores. She greeted and guided customers through the shop, danced on request, and wittily answered customers’ questions on topics ranging from product information and weather forecasts to their love lives.
In January 2016, IBM announced a collaboration with SBRH that equipped Pepper with a cognitive computing system named Watson, allowing her to understand sophisticated semantic context through natural language processing and to process humongous volumes of data, including “dark”, or unstructured, data from social media, video, images and text. With her cognitive capability enhanced, Pepper can now provide customers with in-depth analysis of products and services based on their needs and personal information [See Pepper in Mizuho banks].
To be clear, prior to the collaboration, Pepper already had an artificial intelligence that uses pattern recognition to “read” customers’ feelings through their facial expressions and tone of voice. Pepper is also connected to the cloud, where data is processed, and this information accumulates as Pepper gains experience. So what, if anything, distinguishes Pepper’s original artificial intelligence from the new capabilities she received from Watson’s cognitive computing system?
Definitions are still being developed in this emergent field, but it would seem safe to say that artificial intelligence is a broader discipline that encompasses natural language processing, social intelligence and machine learning among other tools, many of which a cognitive computing system also uses. The most notable feature that sets cognitive computing apart from artificial intelligence, for now, would appear to be its relationship with users.
In the healthcare industry, for example, IBM’s Watson helps oncologists keep abreast of the latest developments in the field by scouring new research every day, rather than telling the oncologists what they should do. As Steve Hoffenberg put it: “In an artificial intelligence system, the system would have told the doctor which course of action to take based on its analysis. In cognitive computing, the system provides information to help the doctor decide.” In other words, cognitive computing augments human capability and furthers our expertise.
Any discussion about advanced cognitive technology inevitably leads to the question of whether robots will eventually replace human labor. An Economist report entitled “Lifelong Learning: How to survive in the age of automation”, which appeared in the 14th–20th January 2017 edition, provides useful insights on this question. According to the report, computer science and programming is the second-most-offered subject on massive open online courses (MOOCs) like Coursera and Udacity.
Moreover, 49% of top-paying job openings in America require candidates to have coding skills. Even marketing professionals nowadays might need to understand data analytics, search engine optimization (SEO) or how to develop advanced algorithms. This suggests that many who follow traditional, linear, and “safe” career paths will increasingly feel the pressure to invest in technology skills.
So what will it mean if companies incorporate cognitive computing technology into their business models whereby computer systems generate insightful recommendations, visuals and graphs or process a million pieces of data in seconds? Do we still need to learn how to write code and algorithms if cognitive computing can do the job? Will today’s prudent learners of technology skills be looking for a new niche in the near future?
I recently had the chance to discuss this matter with Jason Wang, a director of Financial Services at Baidu China, after his talk on “Artificial Intelligence Disruption in the Financial Industry”. According to Jason, there are still many hurdles to clear before cognitive computing or artificial intelligence systems will be able to do our jobs. Collecting the huge volumes of data needed to train cognitive computing systems like IBM’s Watson remains a mammoth task. Data scientists still need to refine unstructured datasets before they can be used. And professionals in every industry will continue to work closely with computers to solve problems for consumers, just as they always have.
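To give a flavor of what “refining unstructured data” means in practice, here is a toy sketch in Python. It is not Watson’s pipeline or anything Jason described; the function name and sample social-media post are hypothetical, and real cleaning involves far more (deduplication, language detection, entity resolution):

```python
import re

def clean_text(raw: str) -> str:
    """Minimal normalization of an unstructured text snippet:
    strip markup, drop punctuation and emoji, collapse whitespace."""
    text = re.sub(r"<[^>]+>", " ", raw)       # remove HTML tags
    text = re.sub(r"[^\w\s']", " ", text)     # remove punctuation/emoji
    text = re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace
    return text.lower()

# A hypothetical raw social-media post, full of noise a model can't use:
raw_post = "<p>LOVED   the new   phone!!! Battery = amazing</p>"
print(clean_text(raw_post))  # prints: loved the new phone battery amazing
```

Even this trivial step has to be repeated, audited, and adapted across millions of records, which is part of why data preparation remains such a large share of the work.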
It is time to adjust our mindset and focus on learning how to work WITH these new computer systems, allowing them to augment our skills. Jason put it nicely when he said, “it will not be either-or but both humans and robots working side by side in future decision-making processes”.
Anh Dang is a Master’s student in Analytical Political Economy at Duke University. Before arriving in America, she studied at Waseda University in Tokyo, Japan. She also lived briefly in Australia and Bangladesh. She is an aspiring management consultant and likes meeting people.