Artificial Intelligence: Disruption or Opportunity?

Artificial intelligence (AI), one of twenty core technologies I identified back in 1983 as the drivers of exponential economic value creation, has worked its way into our lives. From Amazon’s Alexa and Facebook’s M to Google’s Now and Apple’s Siri, AI’s reach keeps growing, so keeping a close eye on future developments, amazing opportunities, and predictable problems is imperative.

IBM’s Watson is a good example of a fast-developing AI system. Watson is a cognitive computer that learns over time. This cognitive AI technology can process information much more like a smart human than a smart computer. IBM Watson first shot to fame back in 2011 by beating two of Jeopardy’s greatest champions on TV. Thanks to its three unique capabilities — natural language processing; hypothesis generation and evaluation; and dynamic learning — cognitive computing is being applied in an ever-growing list of fields.

Today, cognitive computing is used in a wide variety of applications, including health care, travel, and weather forecasting. When IBM acquired The Weather Company, journalists were quick to voice their amusement. However, IBM soon had the last laugh when people learned that the Weather Company’s cloud-based service could handle over 26 million inquiries every day on the organization’s website and mobile app, all while learning from the daily changes in weather and from the questions being asked. The data gleaned from the fourth most-used mobile app would whet the appetite of the permanently ravenous IBM Watson and enable IBM to increase the level of analytics for its business clients.

Weather is responsible for business losses to the tune of $500 billion a year. Pharmaceutical companies rely on accurate forecasts to predict a rise in the need for allergy medication. Farmers’ livelihoods often depend on the weather as well, which affects not only where crops can be successfully grown but also where the harvest should be sold. Consider the news that IBM followed its Weather Company purchase by snapping up Merge Healthcare Inc. for a cool $1 billion in order to integrate its imaging management platform into Watson, and the dynamic future of AI becomes more than evident.

The accounting industry can benefit from this technology as well. When I was the keynote speaker at KPMG’s annual partner meeting, I suggested that the company consider partnering with IBM to have Watson learn all of the global accounting regulations so that the firm could transform its practice and gain a huge advantage. After doing its own research on the subject, the KPMG team proceeded to form an alliance with IBM’s Watson unit to develop high-tech tools for auditing, as well as for KPMG’s other lines of business.

Thanks to the cloud and the virtualization of services, no one has to own the tools in order to have access to them, allowing even smaller firms to gain an advantage in this space. Success all comes back to us humans and how creatively we use the new tools.

IBM’s Watson, along with advanced AI and analytics from Google, Facebook, and others, will gain cognitive insights mined from the ever-growing mountains of data generated by the Internet of Things (IoT) to revolutionize every industry.

Advanced AI is promising almost limitless possibilities that will enable businesses in every field to make better decisions in far less time. But at what price? Many believe the technology will lead directly to massive job cuts throughout multiple industries and suggest that it is making much of the human race redundant.

It is crucial to recognize how the technological landscape is evolving before our eyes during this digital transformation. Yes, it is true that hundreds of traditional jobs are disappearing, but it’s also important to realize the wealth of new roles and employment opportunities arriving that are needed to help us progress further.

The rise of the machines started with the elimination of repetitive tasks, such as those in the manufacturing environment, and it is now moving more into white-collar jobs. The key for us is not to react to change, but to get ahead of it by paying attention to what I call the “Hard Trends” — the facts that are shaping the future — so that we can all anticipate the problems and new opportunities ahead of us. We would do well to capitalize on the areas that computers have great difficulty understanding, including collaboration, communication, problem solving, and much more. To stay ahead of the curve, we will all need to learn new things on an ongoing basis, as well as unlearn the old ways that are now holding us back. Remember, we live in a human world where relationships are all-important.

We need to be aware of the new tools available to us, and then creatively apply them to transform the impossible into the possible. By acquiring new knowledge, developing creativity and problem-solving skills, and honing our interpersonal, social, and communication skills, we can all thrive in a world of transformational change.

Are you reacting to change or paying attention to the Hard Trend facts that are shaping the future?

If you want to anticipate the problems and opportunities ahead of you, pick up a copy of my latest book, The Anticipatory Organization.

Shaping the Future of A.I.

One of the biggest news subjects in the past few years has been artificial intelligence. We have read about how Google’s DeepMind beat the world’s best player at Go, which is thought of as the most complex game humans have created; witnessed how IBM’s Watson beat humans in a debate; and taken part in a wide-ranging discussion of how A.I. applications will replace most of today’s human jobs in the years ahead.

Way back in 1983, I identified A.I. as one of 20 exponential technologies that would increasingly drive economic growth for decades to come. Early rule-based A.I. applications were used by financial institutions for loan applications, but once the exponential growth of processing power reached an A.I. tipping point, and we all started using the Internet and social media, A.I. had enough power and data (the fuel of A.I.) to enable smartphones, chatbots, autonomous vehicles and far more.  

In advising the leadership of companies, governments, and institutions around the world, I have found that we all have different definitions of, and levels of understanding about, A.I., machine learning, and related topics. If we don’t share common definitions and a common understanding of what we are talking about, we will likely create an increasing number of problems going forward. With that in mind, I will try to add some clarity to this complex subject.

Artificial intelligence refers to computing systems designed to perform tasks usually reserved for human intelligence. These systems use logic, if-then rules, decision trees, and machine learning to recognize patterns in vast amounts of data, provide insights, predict outcomes, and make complex decisions. A.I. can be applied to pattern recognition, object classification, language translation, data translation, logistical modeling, and predictive modeling, to name a few. It’s important to understand that all A.I. relies on vast amounts of quality data and advanced analytics technology; the quality of the data used will determine the reliability of the A.I. output.
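To make the distinction concrete, here is a minimal sketch (my own illustration, not drawn from any IBM or KPMG system) that contrasts an early rule-based approach with a decision tree learned from a handful of made-up loan-application records:

    # Illustration only: a hand-written if-then rule vs. a learned decision tree.
    # The loan data below is entirely hypothetical.
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical applicant features: [annual_income_in_thousands, debt_ratio]
    X = [[35, 0.6], [80, 0.2], [50, 0.5], [120, 0.1], [40, 0.7], [95, 0.3]]
    y = [0, 1, 0, 1, 0, 1]  # 0 = deny, 1 = approve (made-up labels)

    def rule_based(income_k, debt_ratio):
        # Early, rule-based A.I.: a human encodes the if-then logic directly.
        return 1 if income_k > 60 and debt_ratio < 0.4 else 0

    # Machine learning: the decision boundaries are inferred from the data instead.
    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

    applicant = [70, 0.35]
    print("rule-based decision:", rule_based(*applicant))
    print("learned-tree decision:", tree.predict([applicant])[0])

Both approaches answer the same question; the difference is whether a person or the data supplies the decision logic.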

Machine learning is a subset of A.I. that uses advanced statistical techniques to enable computing systems to improve at tasks with experience over time. Digital assistants like Amazon’s Alexa and Apple’s Siri, along with their counterparts from Google and Microsoft, get better every year thanks to our constant use and the machine learning that takes place in the background.
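Here is a rough sketch of what “improving with experience” looks like in code, using a synthetic dataset and scikit-learn (both are my assumptions; no particular toolkit is implied above). The model’s test accuracy is reported after each new batch of examples it sees:

    # Illustration: a classifier that improves as more examples arrive over time.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic data standing in for real-world usage logs.
    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = SGDClassifier(random_state=0)
    seen = 0
    batch_size = 500
    for start in range(0, len(X_train), batch_size):   # experience arrives in batches
        X_batch = X_train[start:start + batch_size]
        y_batch = y_train[start:start + batch_size]
        model.partial_fit(X_batch, y_batch, classes=[0, 1])
        seen += len(X_batch)
        print(f"after {seen} examples: test accuracy = {model.score(X_test, y_test):.2f}")

In the same spirit, every question we ask a digital assistant becomes another batch of experience for the models running behind it.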

Deep learning is a subset of machine learning that uses advanced algorithms to enable an A.I. system to train itself to perform tasks by exposing multi-layered neural networks to vast amounts of data, then using what it has learned to recognize new patterns in that data. Learning can be supervised (guided by human-labeled examples), unsupervised, and/or reinforcement learning, the approach Google’s DeepMind used to learn how to beat humans at the complex game of Go. Reinforcement learning will drive some of the biggest breakthroughs.
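Here is a minimal sketch of the “multi-layered neural network” idea, written in PyTorch as supervised learning on random synthetic data (the framework and the data are my choices for illustration; real deep learning systems train far larger networks on far more data):

    # Illustration: a small multi-layer network learning a simple hidden pattern.
    import torch
    from torch import nn

    X = torch.randn(256, 10)                       # 256 synthetic examples, 10 features each
    y = (X.sum(dim=1, keepdim=True) > 0).float()   # the pattern the network must discover

    model = nn.Sequential(                         # several stacked layers = "deep"
        nn.Linear(10, 32), nn.ReLU(),
        nn.Linear(32, 32), nn.ReLU(),
        nn.Linear(32, 1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
    loss_fn = nn.BCEWithLogitsLoss()

    for epoch in range(100):                       # supervised learning from labeled examples
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()
    print(f"final training loss: {loss.item():.3f}")

Replace the labeled examples with a reward signal and you move toward the reinforcement learning mentioned above.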

Autonomous computing uses advanced A.I. tools such as deep learning to enable systems to be self-governing, capable of acting on situational data without a human command. A.I. autonomy includes perception, high-speed analytics, machine-to-machine communications, and movement. Autonomous vehicles, for example, use all of these in real time to pilot themselves without a human driver.
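As a toy illustration of that perceive, analyze, act cycle (simulated sensor values only, nothing like a production vehicle stack):

    # Toy illustration of an autonomous perceive -> decide -> act loop.
    import random

    def perceive():
        # Simulated sensor reading: distance to the nearest obstacle, in meters.
        return random.uniform(0.0, 50.0)

    def decide(distance_m):
        # High-speed analytics reduced to a single threshold rule for illustration.
        return "brake" if distance_m < 10.0 else "cruise"

    def act(command):
        # Stands in for machine-to-machine communication with an actuator.
        print(f"actuator command: {command}")

    for tick in range(5):          # a real system runs this loop continuously, in real time
        act(decide(perceive()))

No human issues a command inside the loop; the system governs itself based on the situational data it perceives.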

Augmented thinking: Over the next five years and beyond, A.I. will become increasingly embedded at the chip level into objects, processes, products and services, and humans will augment their personal problem-solving and decision-making abilities with the insights A.I. provides to get to a better answer faster.   

A.I. advances represent a Hard Trend that will happen and continue to unfold in the years ahead. The benefits of A.I. are too big to ignore and include:

  1. Increasing speed
  2. Increasing accuracy
  3. 24/7 functionality
  4. High economic benefit
  5. Ability to be applied to a large and growing number of tasks
  6. Ability to make invisible patterns and opportunities visible

Technology is neither good nor evil; what matters is how we as humans apply it. Since we can’t stop the increasing power of A.I., I want us to direct its future, putting it to the best possible use for humans. Yes, A.I., like all technology, will take the place of many current jobs. But A.I. will also create many jobs if we are willing to learn new things. There is an old saying: “You can’t teach an old dog new tricks.” With that said, it’s a good thing we aren’t dogs!

Start off the new year by anticipating disruption and change: read my latest book, The Anticipatory Organization. Click here to claim your copy!