Microsoft's Satya Nadella: 6 Must-Have AI Design Principles

In an essay published in Slate, Microsoft CEO Satya Nadella argued that AI should be embraced rather than feared, and outlined six principles and goals that AI research must follow.

Dawn Kawamoto, Associate Editor, Dark Reading

June 30, 2016

3 Min Read
(Image: Henrik5000/iStockphoto)


Despite some predictions that artificial intelligence will one day take over the world, Microsoft CEO Satya Nadella says AI should be embraced, not feared, and has outlined design principles and goals that should guide the creation of the technology.

In his essay published in Slate Tuesday, Nadella discussed the great promise of AI, or advanced machine learning, and how in an AI world, "productivity and communication tools will be written for an entirely new platform, one that doesn't just manage information but also learns from information and interacts with the physical world."

He also listed six principles and goals that should be considered when designing AI to quell concerns in society about the potential for it to harm mankind:

  • "AI must assist humanity." As an example, collaborative AI robots should handle dangerous work, such as mining, that would otherwise put human lives at stake.

  • "AI must be transparent." Information about how AI technology analyzes and sees the world around it should be provided to the public, so people will know how it works and the rules that it operates under.

  • "AI must maximize efficiencies without destroying the dignity of people." Diversity among the people who design AI will play a role in preserving "cultural commitments" and "empowering diversity." Earlier this year, Microsoft had to silence its chatbot Tay after it started tweeting racial slurs.

  • "AI must be designed for intelligent privacy." Personal and group information needs to be secure, in order to earn trust among users.

  • "AI must have algorithmic accountability." This would allow humans to have a panic button to reverse any unintended harm.

  • "AI must guard against bias." The goal is to prevent the wrong heuristics from being used to discriminate.

Nadella added that "there are 'musts' for humans too -- particularly when it comes to thinking clearly about the skills future generations must prioritize and cultivate." Those "musts" include empathy, education, creativity, judgment, and accountability.

"Ultimately, humans and machines will work together -- not against one another. Computers may win at games, but imagine what's possible when human and machine work together to solve society's greatest challenges like beating disease, ignorance, and poverty," Nadella said in his essay.

"The beauty of machines and humans working in tandem gets lost in the discussion about whether AI is a good thing or a bad thing. Our perception of AI seems trapped somewhere between the haunting voice of HAL in 2001: A Space Odyssey and friendlier voices in today's personal digital assistants -- Cortana, Siri, and Alexa."

Microsoft's CEO further noted that rather than debating whether AI is good or evil, time would be better spent examining the values held by the companies and employees creating machine learning.

[See 10 AI App Dev Tips and Tricks for Enterprises.]

He pointed to the work of Cynthia Breazeal, an MIT Media Arts and Sciences associate professor who oversees the Media Lab's Personal Robots Group. Breazeal, Nadella notes, has observed that while humans are unique among species for the depth of their social and emotional traits, empathy is rarely discussed in the design of technology.

Nadella recounted a recent conversation with Breazeal in which she said, "After all, how we experience the world is through communications and collaboration. If we are interested in machines that work with us, then we can't ignore the humanistic approach."

About the Author

Dawn Kawamoto

Associate Editor, Dark Reading

Dawn Kawamoto is an Associate Editor for Dark Reading, where she covers cybersecurity news and trends. She is an award-winning journalist who has written and edited technology, management, leadership, career, finance, and innovation stories for such publications as CNET's News.com, TheStreet.com, AOL's DailyFinance, and The Motley Fool. More recently, she served as associate editor for technology careers site Dice.com.

