Why Your Company's AI Strategy May Not Be Its Own

Nine big tech firms are deciding your company's fate and even the fate of humanity simply because they have the most control over AI.

Lisa Morgan, Freelance Writer

March 18, 2019


More companies are innovating with, experimenting with, and adopting AI without regard for the potential long-term consequences of their actions. As it has for the last 50 years, innovation continues to focus on the art of the possible and rapid financial returns, while corporate strategies continue to revolve around quarterly profits.

In the meantime, AI is making its way into every industry, transforming traditional products and services into intelligent counterparts that will forever change human-machine relationships. As more autonomous and intelligent systems hit the market, exhibiting both expected and unexpected behaviors, it will become more apparent that organizations must balance time-to-market imperatives with sound risk management strategies.

The Nine Companies Shaping AI's Future

Futurist Amy Webb's forthcoming book, The Big Nine, explains why efforts by Google, Amazon, Facebook, Apple, Microsoft, IBM, Baidu, Alibaba, and Tencent may not be in the best interests of other businesses, or of humanity, over the long term. Right now, businesses are moving more of their operations into the cloud as they execute digital transformation strategies and consume more services there, including AI-related services. Getting into a cloud service is easy; getting out is not.

Meanwhile, consumers are freely giving away personal information to these and other companies without any idea of how the companies they interact with will use that information. Most consumers may not realize that The Big Nine understand individuals and their habits better than their friends or family. What's more, The Big Nine continue to invent new ways of capturing more data as evidenced by a steady stream of new smart products.

"The rich tradition of competition and innovation in the U.S. has [allowed The Big Nine to work] pretty much unencumbered," said Webb. "Outside of SEC regulations, all the money flowing into AI has basically come with no strings attached [and] there is no regulation of AI."

In the absence of laws and regulations designed to enforce socially desirable behaviors, businesses are free to innovate and distribute their creations without regard for the long-term consequences of their actions. However, some Big Nine employees are taking a stand, as evidenced by the Google employees who refused to work on video image recognition for the Pentagon, the Microsoft employees who revolted against a $10 billion Pentagon contract and the Army's use of HoloLens, and the Amazon employees who are dead set against facial recognition systems used by police forces and government agencies.

"If this were any other industry, we might see some guardrails or some desire to slow down but we just don't see that in the space," said Webb.

In the absence of a cause that aligns with their own personal ethics, AI system designers and developers, and the strategists above them, tend not to consider the ethical implications of AI. Even if they are cognizant of AI ethics, their career and financial incentives revolve around getting products to market quickly, not pondering the potential scenarios of how those products might be used or abused inadvertently or intentionally.

Amy Webb. Image credit: Elena Seibert.

Most governments and lawmakers are not yet in a position to respond appropriately, as evidenced by the general absence of national AI strategies. A few governments, including Canada, Dubai, and China, have strategies. The U.S. has a set of bullet points.

"China is pushing ahead quickly. There's a much more coordinated approach," said Webb. "The money is coming from the Chinese government and part of the speed initiative is not driven by individual demands or shareholder interests but by Xi Jinping and the CCP's desire to create a new world order with China at the helm. I feel like this is a bad developmental track that we're on, a really bad one."

Risk Modeling Should Be a Priority

While Webb doesn't believe (and there is little evidence to suggest) that any business is intentionally trying to destroy the future, it is nevertheless unwise to approach AI as if it were "just another technology," since autonomy and machine intelligence distinguish the latest generation of products. Risk management strategies must evolve with the changing nature of technologies.

"A fair chunk of any money that is put into the development of anything related to AI should go to risk modeling, which it doesn't now," said Webb. "The point of risk modeling is not just to determine the potential negative consequences in advance, it's to identify where the bugs are."

The problem is that anything negative related to AI, whether weighing its potential risks, building in ethics, or otherwise pausing to consider the broad array of potential consequences, won't become an integral part of the innovation process until customers and the sources of capital demand it.

Global Coordination Is Necessary

A number of organizations are attempting to take a global approach to AI policy, including the Ethics and Governance of AI initiative (Harvard Berkman Klein Center/MIT Media Lab), IEEE's Global Initiative on Ethics of Autonomous and Intelligent Systems, and The Future Society's AI Initiative. While their efforts are all global in nature, Webb doubts that any of them can achieve the level of global coordination necessary. Instead, she envisions the establishment of a new entity, which she calls the Global Alliance on Intelligence Augmentation (GAIA), but she does not yet know who would spearhead and operate such an entity.

Freedom to Innovate Isn't Free

The desire to be innovative and competitive is driving the rapid adoption of AI across industries. However, the increasing pace of AI adoption fueled only by its potential benefits is risky.

"We can't sit around and debate this for a number of years," said Webb. "I hope people see there is opportunity as well as serious and dangerous problems on our horizon and that we can decide that we're going to figure out a way to move forward."

As companies continue their moves into the cloud and consume the AI tools that are readily available to them, they may not realize the degree to which they are aligning themselves with those providers.

"Everyone knows what a nightmare it is to switch mobile [service providers] or switch your company between SharePoint and Google Office. We're talking about your company's data. Essentially, your company is an Amazon company, a Google company or a Microsoft company," said Webb. "So many of our systems are optimized and automated, but we don't have a clear enough understanding of who made the decision about what, when and why to optimize."

Rather than pondering what it means to align one's entire business strategy with a certain provider (or two core providers), many IT leaders are deciding among platforms based on sticker price, which gets back to short-term versus long-term thinking.

"Most CIOs are not thinking about the longer-term effects of data governance and ownership and who owns what part of the automation process," said Webb. "This is complicated stuff and these decisions should not be made on dollars and cents alone."

Bottom Line

Technology products and services are continuing their shift from "dumb" to "intelligent" as AI finds its way into more categories of software and systems. Unfortunately, models for technology innovation, technology adoption and technology investment have not evolved with them yet, but they will over time.

While there is a growing roster of public and private organizations that are trying to address the many implications of AI, more effective global coordination needs to take place in the best interests of everyone.

To learn more about AI ethics, check out these recent articles:

AI as a Human Right

How to Buy External Data to Fuel Analytics, AI Insights

The Future of AI in America: What All Leaders Should Consider

It’s up to Smart Humans to Stop Being Stupid About AI


About the Author(s)

Lisa Morgan

Freelance Writer

Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit. Frequent areas of coverage include big data, mobility, enterprise software, the cloud, software development, and emerging cultural issues affecting the C-suite.
