Innovation is entering a new stage of maturity as a range of academic and industry organizations ponder the impacts of autonomous and intelligent systems.

Lisa Morgan, Freelance Writer

July 17, 2018


There's a lot of discussion about autonomous and intelligent systems these days, but few realize the impact those technologies will soon have on technology design and use. Already, formal and informal groups are debating the potential impacts of AI systems with the goal of articulating values, principles, and best practices that help guide the responsible design and use of such systems.

For example, the MIT Media Lab and the IEEE Standards Association (IEEE-SA) jointly announced the formation of the Council on Extended Intelligence (CXI), which intends to "build a new narrative" for AI/intelligent systems (A/IS) technology, inspired by principles of systems dynamics and design. Meanwhile, in the State of Washington, an informal group of business and technology leaders, professors and others may well pen a functional equivalent to the Agile Manifesto for the designers of intelligent systems.

What the MIT Media Lab and IEEE are doing

CXI's initial efforts focus on three areas: participatory design, digital data agency, and metrics that measure prosperity more holistically than traditional indicators such as Gross Domestic Product (GDP). The respective projects are described below, though the group's first challenge is distinguishing Extended Intelligence from AI and Augmented Intelligence.

"Extended Intelligence is a term developed by a group at the MIT Media Lab to promote a different understanding of the relationship between humans and machines," said Andre Uhl project lead, Extended Intelligence at CXI. "At CXI, we won’t be asking, 'Who is controlling who[m]?' but rather, 'How can we cooperate?”

CXI Executive Director John Havens explained that Augmented Intelligence is human-centric whereas Extended Intelligence has a broader, system-level perspective that considers the relationships among people, machines, and the environment.

CXI intends to create an introduction to Extended Intelligence and Participatory Design that avoids reductionism (oversimplification). The resulting body of work will include articles, webinars and a curriculum.

Meanwhile, the "Digital Identity" project will create a data policy template that governments and organizations can use to help individuals reclaim their digital identity in the algorithmic age.

"First and foremost, people have to understand that digital democracy does not truly exist for [most] of the world’s inhabitants; paradoxically, even not for the ones who are supposed to command the data-machines," said Konstantinos Karachalios, Managing Director of the IEEE-SA. "While GDPR and similar data protections are essential and helpful, they still represent high-level required protections from governments and corporations regarding the access of people’s data, but implementations are still vague."

The rise of blockchain and other decentralized, sovereign identity structures may enable individuals to control the clarity and flow of their own data by storing their data in private data clouds and creating their own digital terms and conditions. Since the global implementation of a one-size-fits-all solution is impractical, CXI will provide more general best practices that can be adapted based on country-specific and other contextual considerations.

Finally, the "Enlightened Indicators" project addresses the need for an evolved set of metrics in light of A/IS technology. The output of that project will be a wellbeing indicator template that governments and organizations can use to measure prosperity.

"GDP only measures financial and productivity metrics. Similarly, when we compare the wellbeing of two people we typically compare salaries, jobs and a couple of other factors, even though personal wellbeing includes mental health which is overlooked a lot," said CXI's Havens. "When wellbeing is measured in a holistic context than just economic productivity, you get a better sense of how a citizen is doing in the context of a community or country."

CXI will not set standards because it is not a standards body, although it may propose standards in the future. Meanwhile, the IEEE-SA has approved 14 standards projects that have arisen as a result of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Havens is also the executive director of that group, which complements CXI. (Full disclosure: This author recently joined the IEEE and its A/IS ethics group to participate in community discussions about important A/IS topics.)

"Part of our work with CXI will be to let the public know they can join [the IEEE] P7000 Working Groups to support the work we are doing with the Council," said IEEE-SA's Karachalios.

Meanwhile in Seattle

Professional services firm Avanade, cloud-native companies, large brick-and-mortar companies, startups, universities and others are joining forces to create something akin to the Agile Manifesto for those designing intelligent systems. Right now, there are approximately 40 participants in that group. By comparison, CXI has about 50 members and the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems has 1,050. (Group membership numbers don't account for variables such as whether the organization is new or mature, formal or informal, global or regional, and open or closed.)

The Washington state group, for example, is regional and informal, at least for now. Like some other groups, its recent formation was driven by the realization that A/IS technologies necessitate ethical approaches to their design and use.

"Ten or 15 years ago, architects were optimizing for scalability and performance," said Florin Rotar, digital innovation lead at Avanade, who participates in the Washington state group. "Now that the industry is a little more mature, you've got to factor in more aspects of the equation. Innovation and digital ethics are not mutually exclusive; they have to be considered in parallel."

The Washington state group values brevity, as inspired by the Agile Manifesto. By comparison, Ethically Aligned Design (EAD), the document from the IEEE Global Initiative on Ethics of A/IS, exceeds 260 pages and includes extensive links to relevant outside documents.

One approach is not better than the other, per se, because they serve different purposes. On one hand, principles and values must be stated simply enough that the maximum number of people can comprehend them. Such documents also have to be written concisely enough to avoid confusion about the tenets themselves. At the same time, there is a need for documents that explain what the critical A/IS issues are so everyone involved in the discussion has access to a common baseline.

"Digital ethics is a really complex topic," said Rotar. "There is a level of principles, almost like a constitution that everybody believes in emotionally and intellectually. Then there is a more pragmatic guidance that needs to be built using those principles on a level which is applicable on a day-to-day basis."

Making sense of it all

A/IS-related ethics groups are forming all over the world with the goal of advancing responsible systems design and use. A common sentiment among them seems to be that traditional approaches to building solutions, innovating, and measuring success are inadequate in light of rapidly advancing A/IS technologies.

While the fruits of the groups' respective efforts may not be obvious or even available yet, technology-related values are already starting to shift in ways that will impact not only A/IS design and use, but also funding, brand identities, corporate values, societal values, lawmaking and law enforcement.

"Designers and engineers need to think differently about design in general because of the data that is utilized to power the algorithms behind the majority of A/IS. Biased datasets can derail positive or reinforce negative unintended consequences of a particular product service or system," said IEEE's Karchalios. "Also, while GDPR and similar practices are positive steps forward, by and large most people cannot access, control, or share their data with any sense of parity to how it is captured and (mis)used by third parties. Moving forward, anyone creating A/IS needs to ensure their products, services or systems are ethically aligned with the values of end users where said values include honoring existing human rights and associated law."

To learn more about ethical and responsible AI, check out these recent articles.

Job Loss and AI: Startups Accentuate the Positive

The Big Data Question: To Share or Not To Share

Doing Computer Vision Without Cameras

Why AI is So Brilliant and So Stupid

About the Author(s)

Lisa Morgan

Freelance Writer

Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit. Frequent areas of coverage include big data, mobility, enterprise software, the cloud, software development, and emerging cultural issues affecting the C-suite.
