AI Robots Are Here. Are We Ready?

Robots are getting smarter and more intuitive. Can people survive the competition?

John Edwards, Technology Journalist & Author

May 25, 2023


Robots are getting smarter, thanks to rapidly advancing artificial intelligence technology. Exactly how smart and innovative robots will become over the next few years remains an unanswered question, but many experts are already making some educated guesses.

We don’t yet have robots with generative AI built in -- the type of conversational capacity demonstrated by GPT-4 and its kin, observes Karla Erickson, a professor of sociology at Grinnell College. “When that approach to artificial intelligence is combined with advanced robotics in terms of skilled movement and dexterity, it will be both remarkable and a little startling,” she says. “For many decades, robotic advances and artificial intelligence have largely been developing separately, but they will increasingly combine -- an artificial mind for an artificial body.”

AI robots already exist, although the intelligence is not human-like, and we certainly have not yet reached the level of general artificial intelligence, says Nancy J. Cooke, director of the Center for Human, AI, and Robot Teaming, and a professor in human systems engineering at Arizona State University.

It’s safe to say that AI technology has come a long way, but there’s still a lot left to explore when it comes to advanced capabilities, says Cooke, a past president of the Human Factors and Ergonomics Society. “AI algorithms don’t think or learn like humans,” she notes. “There are many differences in processing speed and memory.” Nevertheless, even at the current stage, there are already many examples of AI robots, some more intelligent than others.

In the Beginning

Current-generation AI robots are already highly adept at multiple tasks, including repetitive manufacturing functions, autonomous shelf stocking in warehouses and grocery stores, surgery, self-driving vehicles, bomb disposal, and drone control, Cooke observes.

Wayne Butterfield, a partner at technology research and advisory firm ISG, notes that most current-generation AI robots are dedicated to performing only a single task. “We haven’t yet cracked the code on multi-modal multitasking,” he states. Yet next-generation AI robots will need more than just an ability to move around. “They will also have to be able to see and communicate, adding further complexity to already difficult challenges.”

One of the most promising aspects of AI robots will be freeing humans to engage in more interesting, creative, and intellectually stimulating work -- or enjoy extra leisure time. “By conducting repetitive, dirty, or even dangerous tasks that humans can’t or don’t want to do, AI robots could allow humans to focus on innovation and problem-solving,” Cooke says. “As technology continues to evolve, AI robots will play an increasingly significant role in our lives and society.”

Robot progress will likely be viewed by most people as evolutionary rather than revolutionary, Erickson says. Yet progress will occur swiftly. Most current robots need to be upgraded by experts, she observes. Generative AI promises to democratize robotics AI and transform the industry. “Generative AI approaches that feed on large language model (LLM) technology and generate text and human-like knowledge will seem to approximate humans on multiple fronts -- in body, movement and mind,” Erickson explains.

Getting Along

AI robots will need to be evaluated not just in terms of how well they perform their assigned tasks, but how they do so while interacting with humans. “This is the essence of human factors and human systems integration applied to AI robots,” Cooke says.

Critical robot factors, Cooke says, include understanding how their behavior affects human safety, productivity, and satisfaction in the tasks they perform; how they interact with other robots or humans; and how their physical form and the environment in which they work affect performance.

Human factors also require designing robots that work within the natural abilities of their human users. “For example, an AI robot should be able to recognize and understand directions and input from humans, which may come in the form of keyboard commands, speech, gestures, and even facial expressions,” Cooke says.

AI robots will also need to be “smart” enough not to be confused by multiple interactions with different people, and user-friendly enough not to overwhelm the humans they are interacting with or taking direction from, Cooke says.

Stranger Danger

As AI robots begin handling a growing number of tasks, many in close proximity to people and, in some cases, in collaborative situations, safety concerns will need to be addressed.

Depending on the tasks they’re given, and their capabilities, robots can create both safety and ethical concerns, Cooke warns. Self-driving cars, for example, have already been responsible for several deaths. Meanwhile, the number of robot-related industrial accidents is increasing. Additionally, “deaths of despair” are growing as robots begin replacing workers in industrial jobs.

Even more ominous are AI-driven autonomous weapon systems. The US military doesn’t allow the use of such systems, ensuring that life-or-death decisions are always made by humans. Yet this policy may not be adopted by all nations, possibly leading to teams of AI robot warriors.

Getting to Know Who?

To ensure that AI robot performance is optimized for users, developers should take into consideration factors such as noise level, usability, comfort, and ergonomics. “These and other human factors should also be taken into account when evaluating the performance of an AI robot,” Cooke says. “They should be thoroughly tested in different scenarios prior to being put into production.” This includes testing the robot’s ability to interact effectively with users, respond appropriately to commands and instructions, and carry out tasks accurately, efficiently, and safely, she explains.

Encountering beings that have different, non-human ontology will require practice, careful consideration, and perhaps even some human-robot training, Erickson says. Much work remains, she observes. “Avenues for safe practice, for guardrails, and for safe experimenting with these new interactional companions are insufficient at this time.”


About the Author

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.
