The topic of artificial intelligence has broken out of computer science labs and into boardrooms across industries, as recently noted in the Harvard Business Review: “The buzz over AI has grown loud enough to penetrate the C-suites of organizations around the world, and for good reason.” Investment in AI is growing and is increasingly coming from organizations outside the tech space. This news does not mean that Ultron will soon dominate the Fortune 500, but it does hint at a powerful new productivity paradigm that is reshaping enterprises.
What we broadly term “AI” (as opposed to traditional sequential computation) seeks to replicate the human mind's ability to “think” — to understand the underlying context of decision making based on multiple simultaneous inputs. Such technologies are already transforming a number of important business processes. Witness the prevalence of chatbots, facial recognition applications, self-healing networks, and virtual assistants such as Siri or Alexa that “learn” from interactions.
This paradigm has nothing to do with the self-aware cyberbeings popular in sci-fi. Instead, it's deeply rooted in the quest to improve human capability. It is an “augmented intelligence” model defined by the positive-sum synergy between data-driven, technology-enabled analytics and human logic, business acumen, and intuitive contextual awareness.
Simply put, when you combine strong business strategies with powerful analytic tools and capabilities, the impact that teams can have on their businesses is far greater than relying solely on human effort or artificial intelligence alone. Big data and computational capacity growth have enabled incredible advancements in AI, but the deft intelligence of humans still plays an outsized role in translating those advancements into value creation.
Before exploring this augmentation concept further, it’s important to understand AI in the context of some pervasive misconceptions about its capabilities and applications to business.
- “AI is a job killer” -- Automation is not AI. While many lower-value and highly repetitive tasks can and should be replaced by smart, AI-driven processes, this technical and operational migration likely will open up an entirely new category of knowledge jobs defining how AI is best leveraged.
- “AI can effectively boil the ocean” -- Big data collection and storage transformed the sheer amount of information available, and led to the false conclusion that artificial intelligence agents could independently find hidden insights and project what to do next. This concept ignores two critical and linked considerations. First, “issues identification” must precede analysis, providing logical direction to what would otherwise be an indiscriminate process of data crunching. Second, without a scientific method of hypothesis and controlled experimentation, correlation blurs with genuine causation and leads in all the wrong directions (for good examples of this, and a good laugh, check out the Spurious Correlations site).
- “If left unchecked, AI will take over” -- In every practical sense, humans still define and design AI platforms. It remains to be seen if AI will ever have the capacity to eliminate people from that self-learning loop entirely.
What is augmented intelligence?
Augmented Intelligence deals in practicalities. It recognizes and applies the powerful combination of people and technology to define and solve complex business, scientific, and policy challenges. And, the augmentation relationship between human and computer reaches both ways.
The first case is well understood and widely utilized: AI, and analytics more broadly, improves the power of humans to make better, faster, minimally biased, and fact-based decisions (thus augmenting our intelligence). Quantitative models assign propensity scores based on historical data to future actions; algorithms give hedge funds an edge in picking, buying, and selling equities and options; bots automate repetitive or low-risk tasks.
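To make the propensity-scoring idea concrete, here is a minimal sketch in Python. The feature names, weights, and bias below are entirely hypothetical stand-ins for coefficients that a real model would learn from historical data:

```python
import math

# Hypothetical weights; in practice these would be fit to historical outcomes
# (e.g., via logistic regression), not hand-chosen.
WEIGHTS = {"recent_purchases": 0.8, "email_opens": 0.5, "tenure_years": -0.1}
BIAS = -2.0

def propensity_score(features):
    """Logistic score in (0, 1): higher means the action is more likely."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# An engaged customer vs. a dormant one (illustrative inputs).
likely = propensity_score({"recent_purchases": 3, "email_opens": 4, "tenure_years": 1})
unlikely = propensity_score({"recent_purchases": 0, "email_opens": 0, "tenure_years": 8})
print(f"engaged: {likely:.2f}, dormant: {unlikely:.2f}")
```

The score itself is just arithmetic; the human contribution is choosing which behaviors to measure and which future action the score should predict.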
The common cognitive biases first documented by the behavioral economists Amos Tversky and Daniel Kahneman illustrate the tremendous importance of augmenting human intuitive heuristics with data science. Essentially, most of us are susceptible to the same decision-making mistakes: we fail to take into account small or non-representative sample sizes; we base decisions on our most recent experiences rather than a longer period of behavior; we focus on factors that are readily available instead of exploring the full set of information; and we anchor to numbers suggested to us instead of thinking rationally through the decision logic.
Data exploration and data science can insulate us from these mistakes by removing our biases. We just have to trust the data over our intuitive flaws.
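A quick simulation makes the small-sample mistake concrete: estimates of the very same conversion rate swing far more widely when drawn from 20 observations than from 2,000. The 10% "true" rate and the sample sizes are illustrative assumptions:

```python
import random

random.seed(42)
TRUE_RATE = 0.10  # assumed true conversion rate for the simulation

def observed_rate(n):
    """Estimate the conversion rate from n simulated customers."""
    return sum(random.random() < TRUE_RATE for _ in range(n)) / n

small_samples = [observed_rate(20) for _ in range(1000)]    # tiny samples
large_samples = [observed_rate(2000) for _ in range(1000)]  # ample samples

spread_small = max(small_samples) - min(small_samples)
spread_large = max(large_samples) - min(large_samples)
print(f"spread with n=20: {spread_small:.3f}; with n=2000: {spread_large:.3f}")
```

An analyst who judged the rate from one 20-customer sample could easily conclude it was 0% or 25%; the data discipline lies in insisting on a sample large enough to trust.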
Less appreciated is the way that human curiosity, intuition, and rigorous scientific methodology actually improve the efficiency and contextual relevance of analytics (humans augment analytic intelligence). This side of the discussion is arguably more interesting in industry, as it preserves and protects the role that people play in creating value for their organizations. Here are a few of the areas in which people substantially augment the power of analytics:
- Actionable segmentation -- Segmentation (or cluster modeling) often yields outcomes that are too numerous to apply, too broad to be prescriptive, or, as marketing becomes more contextual, too high-level for the occasion or journey at hand. The mathematics of segmentation are well known, but it is the touch of the data scientist or business analyst that creates value by identifying the right level of personalization and relevance.
- Business-appropriate dashboards -- More and more businesses are improving their effectiveness by visualizing their KPIs in interactive dashboards. But without a strong grasp of business drivers and objectives (plus a reasonable application of visual design), dashboards can either misrepresent the most critical conclusions or suggest next actions that actually run counter to the business's interests.
- Hypothesis-driven knowledge acquisition -- Test and learn methodology augments analytics in two critical ways: by providing a means to avoid faulty correlations and find causation in a controlled environment, and by learning from continuously comparing expected to actual results via experimentation on existing models with ongoing streams of field data.
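The test-and-learn loop in the last bullet can be sketched as a simulated controlled experiment. The arm size and underlying conversion rates here are assumptions for illustration; in a real test, those rates are precisely the unknowns being estimated:

```python
import random

random.seed(7)

def run_arm(n, rate):
    """Simulate n customers, each converting independently at the given rate."""
    return sum(random.random() < rate for _ in range(n))

# Assumed true rates, unknown to the experimenter in practice.
n = 5000
control = run_arm(n, 0.10)    # current experience
treatment = run_arm(n, 0.12)  # candidate change under test

lift = treatment / n - control / n
print(f"control={control / n:.3f} treatment={treatment / n:.3f} lift={lift:+.3f}")
```

Because customers were randomly assigned to arms, the observed lift can be attributed to the change itself rather than to a lurking correlation, which is exactly the causation-versus-correlation safeguard described above.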
At least for now, we’re not expecting the robots to take over the world. So long as human and artificial intelligence are working in tandem, we’ll continue to make each other better at what we natively do best.
David Rosen leads TIBCO’s Digital Transformation practice. In this role, David collaborates with customers in the midst of their digital journeys, mixing a strong knowledge of technology and a long history of consulting with C-level leaders of some of the world’s largest and most innovative companies. David drives much of TIBCO’s intellectual capital around digital transformation, combining TIBCO’s thought leadership with that of leading strategists and academics. Through his prior leadership of the TIBCO Reward strategy and analytics practice, David has provided value to all of TIBCO’s customers. David is a graduate of Dartmouth College and received his MBA from Stanford’s Graduate School of Business.