October 15, 2018
Keeping up with artificial intelligence, and planning for its impact on businesses, may seem like an insurmountable challenge. AI is already transforming business and industry and will only accelerate further, leading to substantial economic growth.
According to a McKinsey Global Institute report, AI technologies will deliver approximately $13 trillion in additional global economic activity by 2030, or about 16% higher cumulative GDP compared with today. However, AI will also have profound impacts on the social fabric. Some job categories will become obsolete and new ones will be created, though for many years the results will be uneven, with periods when losses could be extensive. A new World Economic Forum report claims a net positive outlook for jobs between now and 2022, noting that “75 million jobs may be displaced by a shift in the division of labor between humans and machines, while 133 million new roles may emerge that are more adapted to the new division of labor between humans, machines and algorithms.”
A PwC study, however, describes three overlapping cycles of automation that will stretch into the 2030s: the algorithm wave, the augmentation wave, and the autonomy wave. PwC notes that the first of these, the algorithm wave, projected to extend into the early 2020s, will replace relatively few jobs. The more impactful waves are expected to come in succession from the mid- to late-2020s and continue into the mid-2030s. According to PwC’s findings, automation will impact 30% of jobs during this timeframe. The political and economic consequences of major job displacement are potentially dramatic.
AI offers great promise, but for many it will also bring painful dislocation. During times of dramatic change, when emotions run hot, business could become a target, not only for basic regulation, but potentially for antitrust investigations and consumer actions such as boycotts. Beyond the clear need to demonstrate algorithm transparency, it is incumbent on businesses to act with a social conscience. This is always good advice, but perhaps never more so than now, when uncertainty looms for businesses developing or deploying AI technologies. Following are six key actions companies can take to enhance their leadership and protect their reputation during times of profound change.
Be proactive with policy-making. Companies can wait for events to overtake them, which often leads to less than desirable outcomes, or they can actively participate in critical policy decisions. Microsoft recently took an unusual step for a business by agreeing that -- at least in some instances -- AI regulation would be useful. In a blog post, company president Brad Smith called for both corporate responsibility and public regulation of facial recognition technology, which so far has proven to be more accurate for white men than for women or people of color. By establishing this point of view, the company demonstrated leadership on an issue of critical importance and enhanced its public credibility.
Establish and adhere to an AI ethical code. AI use will only expand in the future, inevitably raising ethical issues as algorithms increasingly operate cars, homes and businesses. There have already been numerous questionable uses of AI, such as deepfakes, among many others. Businesses should adopt a code of ethics governing their use of AI technologies to ensure their behavior is above reproach and to be better stewards of public trust. IBM, for one, has started down this path with its “Everyday Ethics for Artificial Intelligence.” Similarly, German software powerhouse SAP recently released an ethics code to govern its AI research, aiming to preserve integrity and trust by preventing the technology from infringing on people’s rights, displacing workers or inheriting biases from its human designers.
Perform and document rigorous algorithm testing, and operate transparently. Machine learning algorithms are complex mathematical formulas and procedures that have an increasing impact on people’s lives. As these algorithms govern more decisions, those decisions risk becoming more opaque and less accountable. It is incumbent on businesses to be as transparent as possible and to explain the operation of their algorithms to those who could be impacted by the decisions they generate.
Demonstrate responsible actions to minimize negative impacts from AI. Business leaders will need to develop a robust workforce plan to meet the challenges of this new AI-powered era. This includes a focus on providing skills training to survive and thrive in an AI world. Improving access to education for their employees, including tuition reimbursement and other incentives, will be useful. Companies should also develop a point of view on guaranteed minimum income programs, which are already being tested in several countries.
Showcase societal benefit of AI application plus real-world impact at scale. People generally like the benefits that AI offers today, such as speech and image recognition, search engines, spam filters, product and movie recommendations and more. However, negative AI use cases also exist including security hacks, phishing scams and personalized disinformation campaigns. Most would agree that the use of AI technologies is fine if there are clear societal benefits. That could be relatively easy to demonstrate for advanced medical uses but likely not as clear for some other fields such as supply chain management. In this latter case, there are benefits to leveraging the vast amounts of data collected by industrial logistics, warehousing and transportation systems to increase efficiency, reduce costs, and potentially lower prices. Businesses deploying AI would benefit by clearly articulating the value the technology offers to their employees and end customers.
Don’t hide your light. Lastly, it is often best to be proactive in taking the steps outlined above and in communicating these perspectives to key stakeholders through blog posts, bylined articles, feature stories and public speeches. In doing so, companies go on record as acting responsibly on behalf of their constituencies and society. This goes a long way toward earning trust, and it serves as a reputational buffer should something go wrong in the future.
Gary Grossman is Senior Vice President and Technology Practice Lead, Edelman AI Center of Expertise.