HAL may not take over your spaceship. The Cylons may never try to end the human race. Many AI researchers will chuckle at these science fiction scenarios and talk about how it wasn't too long ago that machine learning was still working on the problem of identifying a cat in photographs. For the most part, they are right.
With the exception of a few technology giants that have worked in the field for years, enterprise organizations, and society as a whole, are really just at the very beginning of exploring the value, potential, and broader implications of artificial intelligence, or AI.
Enterprises, in many cases, know they need to invest in AI and the technologies that underlie AI -- such as machine learning, deep learning, natural language processing, and computer vision -- but they are still struggling with how to get from point A to point B with their initiatives.
Regardless of how fast or slow each enterprise arrives, AI is indeed powering autonomous vehicles, customer service and marketing bots, consumer loan scoring algorithms, social media news feeds, and other processes that impact human lives in both trivial and profound ways.
To better understand how these AIs work in the world, researchers from MIT Media Lab, along with a group of other researchers from educational institutions and from Google, Facebook, and Microsoft, are calling for a new scientific discipline called "Machine Behavior".
"Animal and human behaviors cannot be fully understood without the study of the contexts in which behaviors occur. Machine behavior similarly cannot be fully understood without the integrated study of algorithms and the social environments in which algorithms operate," the researchers wrote in a paper published in the science journal Nature this month.
Right now, the people who study machine behavior are the computer scientists, robotics experts, and engineers who created the machines in the first place, according to the researchers. They argue that while these scientists may be expert mathematicians and engineers, their focus is, rightly, on tuning algorithmic performance.
"Methodologies aimed at maximized algorithmic performance are not optimal for conducting scientific observation of the properties and behaviors of AI agents," the researchers wrote.
What's more, they aren't trained in the study of behaviors.
This new field of research would take the study of AI beyond computer science into biology, economics, psychology, and other behavioral and social sciences, according to a blog post on Medium by the MIT Media Lab.
The researchers say that it's important to study machine behavior because of the ever-increasing roles that algorithms play in our daily lives.
"Because of their ubiquity and complexity, predicting the effects of intelligent algorithms on humanity -- whether positive or negative -- poses a substantial challenge," the researchers wrote.
The paper published in Nature offers several examples. For instance, if a developer creates an automated trading strategy, that strategy might be copied as the developer moves from one company to another, or it could be reverse-engineered by rivals.
Another example comes from social media. An organization's goal of maximizing engagement on a social media site may lead to algorithms that show users only the posts they are most likely to respond to.
Such algorithms could increase political polarization and the spread of fake news, according to the paper.
"However, websites that do not optimize for user engagement may not be as successful in comparison with ones that do, or may go out of business altogether," the researchers wrote.
Autonomous cars present another telling example. Autonomous cars that weigh the safety of pedestrians and other drivers alongside that of their own passengers may be a harder sell to customers.
These examples are just a small sample of the big questions the paper raises, and of those that lie ahead for researchers studying machine behavior.
"Understanding the behaviors and properties of AI agents -- and the effects they might have on human systems -- is critical," the paper concluded. "Society can benefit tremendously from the efficiencies and improved decision-making that can come from these agents. At the same time, these benefits may falter without minimizing the potential pitfalls of the incorporation of AI agents into everyday life."