10 Top Barriers to AI Success
Enterprises seeking the potential benefits of artificial intelligence need to overcome technological, organizational, and cultural challenges.
Enterprises are rushing to adopt artificial intelligence, but the path to deployment isn't always smooth. While the technology has improved significantly in recent years, it doesn’t always live up to the hype. And organizations continue to face technological, cultural, and organizational challenges to their AI initiatives.
In a Vanson Bourne survey sponsored by vendor Teradata, 80% of the IT and business decision-makers responding reported that their organizations were using some form of AI in production. However, 91% anticipated “significant barriers” to the adoption of the technology.
Surveys conducted by analyst firm Gartner found lower adoption numbers, but still revealed intense interest in AI technology. According to Gartner research, making progress with AI is one of CIOs’ top priorities for 2018.
“In 2018, organizations will strive to improve their understanding of what AI is best suited to, and how to deploy it,” said Chirag Dekate, research director at Gartner. “By 2020, 85% of CIOs will be piloting AI programs through a combination of buy, build, and outsource efforts.”
However, in the same blog post, the firm noted that “CIOs will have to overcome challenges” if they want to be successful with their AI projects.
What are those challenges, and what can enterprises do to overcome them and experience the full benefits of AI technology?
The following slides highlight 10 top barriers to AI and suggestions for moving past them.
Thanks in part to decades of science fiction portraying AI as the enemy, many people are worried about the potential consequences of the technology. Well-known figures like Elon Musk, Mark Cuban, Stephen Hawking and others have warned that AI could pose an existential threat to humanity. On a smaller scale, a 2018 Gallup poll found that 73% of Americans believe that AI will take away jobs.
IT leaders interested in introducing AI technology into their organizations may find that other employees have strong negative reactions to the idea. At Google, employees objected and, in some cases, resigned over the company’s AI plans.
The recent pledge by tech leaders not to use AI to create weapons may help to alleviate some of these fears, but IT leaders should still be prepared for some resistance to the idea of AI. Experts recommend introducing the technology slowly, being transparent about how and why the company is using it, and directly addressing employee concerns about whether they will lose their jobs.
Fear often contributes to another potential barrier to AI: lack of executive support. Business leaders who don’t understand the technology or how it can be useful fail to champion AI initiatives. In fact, an IDC survey sponsored by vendor DataRobot found that 49% of enterprises deploying AI technology experienced challenges with stakeholder buy-in, making it the second most common barrier to adoption.
One way to overcome this barrier may be to emphasize the potential benefits of the technology. The same survey found that the top business benefits of AI included increased employee productivity, increased process automation, and uncovering new insights. The report also recommended starting AI adoption with data-rich processes and well-defined business cases in order to establish an early track record of success with the technology.
When stakeholders are less than enthusiastic in their support of a new initiative, they are unlikely to allocate adequate funding. That seems to be happening in some cases with AI.
In the Vanson Bourne/Teradata survey, 30% of those surveyed said that their companies weren’t currently investing enough in AI to keep up with competitors in their industry. In the Americas, the percentage was even higher, with 37% citing lack of budget as a key barrier to adoption.
However, this barrier may become less of a challenge in the near future as many of the same respondents said that their organizations planned to ramp up AI spending in the next 36 months.
From a technological perspective, one of the greatest challenges to AI is a lack of good data. Despite their large and growing stores of big data, many tech leaders say that they don’t have enough of the kind of data they need to support their AI efforts. In the IDC/DataRobot survey, 57% of those surveyed cited a lack of data and skill sets as the number one barrier to AI, and in the Vanson Bourne/Teradata survey, 23% of respondents said they didn’t have enough data.
A bigger problem may be the issue of data quality. Gartner noted that many CIOs “are dealing with data of poor or uncertain quality.” Before they can use their data to train AI models, most data scientists need to embark on lengthy projects to clean their data sets — otherwise the insights they derive from the data are of dubious value.
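As a rough illustration of the kind of cleaning pass described above, the sketch below applies three common rules — dropping exact duplicates, rows with missing fields, and implausible outliers — to a handful of hypothetical records. The field names, values, and thresholds are invented for the example, not drawn from any of the surveys cited here.

```python
# Hypothetical raw records; real cleaning pipelines are far larger,
# but the basic steps are the same.
raw = [
    {"customer_id": 1, "age": 34, "spend": 120.0},
    {"customer_id": 1, "age": 34, "spend": 120.0},   # exact duplicate
    {"customer_id": 2, "age": None, "spend": 85.5},  # missing value
    {"customer_id": 3, "age": 212, "spend": 40.0},   # implausible age
]

def clean(records, max_age=120):
    seen, out = set(), []
    for r in records:
        key = tuple(sorted(r.items()))
        if key in seen:
            continue                      # drop exact duplicates
        seen.add(key)
        if r["age"] is None:
            continue                      # drop rows with missing fields
        if not 0 <= r["age"] <= max_age:
            continue                      # drop out-of-range outliers
        out.append(r)
    return out

cleaned = clean(raw)
print(len(cleaned))  # only the first record survives this pass
```

Even this toy pass discards three of four rows, which hints at why cleaning real-world data sets can consume most of a data science project's timeline.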
Closely related to the issue of data quality is the very real possibility that bias can creep into AI training data. Developers and data scientists don’t intend to use data that skews AI performance, but the inherent biases in society nevertheless find their way into AI systems. Well-publicized examples include a Google image search that misclassified black people as gorillas and AI-based hiring algorithms that favor white men over women and minorities.
Because AI systems are traditionally trained with historical data — data that may reinforce inherent societal biases — this problem is particularly difficult to overcome with technology alone. Even Google found it easier to simply remove the gorilla tag from its image search than to fix the root problem. While waiting for a better technical solution, IT and business leaders should, at a minimum, make themselves aware of the problem and scrutinize decisions that could be based on insights from biased systems.
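One simple form of the scrutiny described above is to compare outcome rates across groups in the historical training data before a model ever sees it. The sketch below does this for a hypothetical hiring data set; the records and field names are illustrative, not taken from any real system mentioned in this article.

```python
from collections import Counter

# Hypothetical historical hiring records used only to illustrate the check.
training_rows = [
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": False},
    {"gender": "female", "hired": True},
    {"gender": "female", "hired": False},
    {"gender": "female", "hired": False},
]

def hire_rate_by_group(rows, group_key="gender"):
    """Return the fraction of positive outcomes per group."""
    totals, hires = Counter(), Counter()
    for r in rows:
        totals[r[group_key]] += 1
        hires[r[group_key]] += r["hired"]  # True counts as 1
    return {group: hires[group] / totals[group] for group in totals}

rates = hire_rate_by_group(training_rows)
# A large gap between groups suggests the historical data encodes a bias
# that a model trained on it would likely reproduce.
```

A check like this cannot prove a data set is fair, but a pronounced gap is a strong signal that decisions driven by the resulting model deserve extra review.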
Another key challenge for enterprises and technology companies looking to expand their use of AI is that there simply aren’t enough qualified AI professionals to fill all the job openings. In the Vanson Bourne/Teradata survey, 34% of those surveyed pointed to a lack of access to talent and understanding as a key barrier to AI. By one estimate, there are about 22,000 AI experts in the world, of whom 3,000 are open to new job opportunities. But there are about 10,000 AI jobs available in the United States alone.
An O’Reilly report, *How Companies Are Putting AI to Work Through Deep Learning*, noted, “AI talent is scarce, and the increase in AI projects means the talent pool will likely get smaller in the near future.” To deal with the shortage, it recommended, “Organizations may be able to get past the skills gap by hiring developers with strong software skills and providing on-the-job training to get them up to speed on AI and deep learning.”
Many enterprise IT leaders also worry about a lack of IT infrastructure as a barrier to AI. Creating and training models requires huge amounts of data, as well as very fast systems. High-performance computing systems are very expensive, which drives up the costs of deploying AI. That makes it unsurprising that in the Vanson Bourne/Teradata study, 40% of those surveyed cited a lack of IT infrastructure as a barrier to AI adoption.
The obvious solution to this problem is for organizations to use a cloud-based AI solution. All the leading cloud vendors offer machine learning, analytics, and cognitive computing services, and the prices are just a fraction of what it would cost firms to buy the infrastructure necessary to support AI. However, this may not be an option for some organizations that are prevented by regulation from storing some types of data in the cloud.
While proponents have very high expectations for AI technology, some say that it hasn’t yet proven its worth. Recent news reports called into question the capabilities of the IBM Watson for Oncology system, citing internal documents that showed the AI recommending “unsafe and incorrect treatment.” In the Vanson Bourne/Teradata survey, a full third of those surveyed said “AI technology is still nascent and unproven.”
However, many experts caution that it would be unwise to wait until AI has fully demonstrated its worth before piloting AI projects. With so many enterprises investigating AI technology, any organizations that decide to sit on the sidelines may soon find themselves left behind.
Those concerns about unproven technology relate very closely to the risk of failed projects. In its Predictions 2018, Forrester forecasted, “In 2018, 75% of AI projects will underwhelm because they fail to model operational considerations, causing business leaders to reset the scope of AI investments — and place their firms on a path to realizing the expected benefits.”
The risk here is that organizations will scale back or eliminate their AI efforts if they aren’t initially successful. To avoid this possibility, IT leaders should choose their pilot projects very carefully, making sure that they have a strong business case and a project that is closely aligned with business goals. They also need to manage expectations carefully, making sure to emphasize that AI technology is improving rapidly over time.
Enterprises that want to deploy AI also need to be cognizant of their compliance requirements. Regulations like the EU’s GDPR place very stringent limits on how personal data can be stored and used, and those rules may affect AI systems, particularly the data used to train AI models.
In a broader sense, lawmakers and pundits are actively debating whether governments should pass laws specifically regulating how AI can be used. IT leaders would do well to monitor those debates and perhaps consider lobbying efforts to ensure that any proposed and passed legislation accounts for the needs of enterprise IT.