AI Rushes Forward Driven by a Sense of Urgency

We don’t know where we are going with artificial intelligence, but we are going there fast.

Guest Commentary

August 27, 2019


A recent survey showed that Americans are evenly split on how they view artificial intelligence (AI). According to Blumberg Capital, an early-stage venture capital firm, 50% of American consumers feel “optimistic and informed” about AI while the other half feel “fearful and uninformed.” Regardless of whether the results are statistically meaningful -- “no estimates of sampling error can be calculated” because the respondents were not randomly selected -- the results are plausible. After all, AI has been described as more profound than electricity and fire, or as tantamount to summoning the demon.

Fearful or not, ready or not, we are hurtling ahead toward the benefits, or the curse, of AI. Investment fund MMC estimates that AI adoption has tripled in just 12 months. A key takeaway from the annual AI Index review is that commercial and research work in AI, along with funding, is exploding pretty much everywhere on the planet.

SoftBank Group, arguably the largest investor in AI, recently revealed its Vision Fund 2, a venture capital fund with plans to inject $108 billion into AI startups. That is roughly equivalent to all global AI startup investment from 2012 through mid-2018. According to SoftBank, “the objective of the fund is to facilitate the continued acceleration of the AI revolution through investment in market-leading, tech-enabled growth companies.” AI is expected to drive global GDP 14% higher in 2030 -- the equivalent of $15.7 trillion, nearly the current combined output of China and India.

Companies and governments are rushing to embrace and integrate AI. Leading AI advocates such as Andrew Ng are encouraging companies to jump into AI sooner rather than later. Research suggests that companies that fall behind in AI adoption may never catch up. Northeastern University professor Nada Sanders said recently that “organizations that take a measured and piecemeal approach to implementing emerging technologies will fall off the map, fade into irrelevance.” A recent op-ed argues that nations should be doubling down on AI research and development to remain competitive. It is, by all accounts, a global race to see who will dominate in AI. Mark Cuban has famously predicted that the world’s first trillionaire will be someone who masters AI and all its derivatives and applies it in ways we never thought of.

All this change, and the value it is creating, is being driven by “narrow” or “weak” AI: algorithms that are incredibly proficient at a single task. Impressive as these algorithms are at discovering new drugs, forecasting volcanic eruptions, and even delivering personalized meditations, they cannot share insights across information domains. Artificial General Intelligence (AGI), or “strong AI,” is the type of intelligence that could, and it would lead to a machine that performs any task a human could. AGI does not yet exist but is widely seen as the holy grail. Ilya Sutskever, co-founder and research director of artificial intelligence research firm OpenAI, said in a conference presentation that “near term AGI should be taken as a serious possibility.” AGI would send the market for AI technologies into hyperdrive.

The accelerating push toward AGI received a huge boost in just the last month. Microsoft’s decision to invest $1 billion in OpenAI feels like jet fuel poured onto an already roaring AI blaze, and the investment is aimed specifically at the development of AGI. OpenAI co-founder and CEO Sam Altman said when the news was announced that AGI “will be the most important development in human history. When we have computers that can really think and learn, that’s going to be transformative.”

It’s not a time for business as usual

Once AGI arrives, humans may not be able to compete. At least that’s a worry of Elon Musk, and a principal reason he founded Neuralink in 2016. Recently, the company revealed its plans for a direct brain-machine interface (BMI). In the near future, a computer chip could be implanted in a human brain and communicate wirelessly with other devices. Initially, the technology is envisioned as a way for people with disabilities to regain motor and cognitive function, with AI helping to bridge gaps in human thought patterns for those who need the boost. The eventual goal, according to Musk, is “to ultimately…achieve a symbiosis with artificial intelligence. We can effectively have the option of merging with AI.” He argues that a BMI coupled with AI would give humans the ability to keep up and compete.

A BMI such as the one envisioned by Neuralink may ultimately prove to be a pipe dream, but the already impressive innovations leave many feeling that AI advances are happening far too quickly. That unease stems largely from the technology’s potential downsides, including worries about unethical uses, widespread unemployment, and an onslaught of AI-powered fake reality. On this view, society should take its time to get AI right; racing ahead in pursuit of profit or military supremacy is wrongheaded, especially if we end up racing to the bottom toward undesirable outcomes such as oppressive surveillance. There’s also the perspective voiced by Yoshua Bengio, considered one of the fathers of AI: “… as a scientist and somebody who wants to think about the common good, I think we’re better off thinking about how to both build smarter machines and make sure AI is used for the well-being of as many people as possible.”

Being thoughtful is all well and good, but the urgent pressure to remain competitive amid massive disruption makes this seem unrealistic. We are rushing toward AGI and BMIs. These technologies sound futuristic, but they may arrive reasonably soon, possibly within a generation, and with unknown impact. It’s hardly a time of business as usual. Perhaps this is why companies, governments, and think tanks are hiring science fiction writers to help them better visualize and plan for an uncertain future. The truth is, we don’t know where we are going with AI, but, driven by a sense of urgency, we are going there fast.


Gary Grossman is Senior Vice President and Technology Practice Lead, Edelman AI Center of Expertise.

