3 Ways Computer Vision Will Put the Human in AI in 2023

The computer vision community seeks to integrate cognitive processing, advance trustworthy algorithms, and address ethical considerations in this new year.

Rama Chellappa, Bloomberg Distinguished Professor at The Johns Hopkins University

January 2, 2023


Computer vision’s influence on artificial intelligence research is growing, according to the 2022 AI Index Report from Stanford University. In fact, the report calls out increased interest in “computer vision subtasks, such as medical image segmentation and masked-face identification.” This shift in focus signals a move toward more practical applications, something the industry will see more of this year.

As AI’s intersection with computer science and engineering disciplines continues to deepen, so, too, do the complications surrounding its implementation and use. At the Conference on Computer Vision and Pattern Recognition (CVPR) 2022, that tension surfaced three key motivations for research and technology development that will continue throughout 2023:

1. Integrating cognitive considerations

At CVPR 2022, Josh Tenenbaum, professor in the Department of Brain and Cognitive Sciences at MIT, discussed how the human brain processes information and how that experience extends beyond data inputs and evaluations.

“From a human cognitive point of view, intelligence is about so much more [than function approximation and pattern recognition]. In particular, it’s about modeling the world; and I mean modeling the world, not just the data,” Tenenbaum remarked. “There’s a sense in which seeing the ‘human way’ is basically making sense of the world in all these ways that people do, from the light coming into our eyes, or our cameras.”

Exploring this train of thought means that the intersection of AI, computing, language processing, auditory analysis, and much of neuroscience will be pivotal to introducing more accurate and intelligent AI.

“We’ve only touched the beginning of integrative AI,” said CVPR 2022 speaker Xuedong Huang, technical fellow and chief technology officer at Azure AI. “The challenge for this community is what is the next GUI [graphical user interface] moment? When Steve Jobs from Apple took his people to Xerox PARC, everyone saw the value of GUI. That movement completely changed the industry. I would say integrative AI, through API, can prepare for the next GUI moment.”

2. Solving for trustworthy AI

According to the 2022 AI Index Report, as large data sets continue to produce new technical benchmarks, they also introduce higher levels of bias. In fact, the report notes, “a 280 billion parameter model developed in 2021 shows a 29% increase in elicited toxicity over a 117 million parameter model considered the state of the art as of 2018.” As new models are deployed and new applications of data emerge, the potential for bias rises with them. Several groups, however, are developing bias-reduction methods that could eventually limit that harm.

AI is also fragile. Adversarial attacks can degrade the performance of AI systems, and many groups are working on modeling those attacks and defending against them. Another major concern is the distribution difference between training and test data. For example, AI techniques in healthcare must handle domain shifts in medical data acquired at different hospitals or pathology labs. Research groups around the world are working on methods that mitigate the domain shift between training and test data.
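To make that fragility concrete, the classic fast gradient sign method (FGSM) nudges an input in the direction that most increases a model’s loss. The sketch below applies it to a toy logistic-regression model; the weights, input, and step size are illustrative assumptions, not drawn from any system discussed in this article.

```python
import numpy as np

# Toy "model": fixed logistic-regression weights (hypothetical values).
rng = np.random.default_rng(0)
w = rng.normal(size=4)   # model weights
x = rng.normal(size=4)   # a clean input example
y = 1.0                  # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability the model assigns to the true class."""
    return sigmoid(w @ x)

# For logistic regression with cross-entropy loss, the gradient of the
# loss with respect to the *input* is (p - y) * w.
grad_x = (predict(x) - y) * w

# FGSM: take a small step in the sign of that gradient.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

# The perturbed input lowers the model's confidence in the true class.
print(predict(x), predict(x_adv))
```

Because the perturbation is bounded by eps in each coordinate, the adversarial input stays close to the original even as the prediction degrades, which is exactly what makes such attacks hard to spot.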

3. Exploring ethical and societal implications

While trust in AI algorithms raises one set of considerations, the ethical boundaries of how the technology is applied draw just as much attention. It is an issue the community continues to grapple with, and one the 2022 AI Index sums up as “the rise of AI ethics everywhere.”

For the computer vision community, that may mean shifts in how it approaches AI-connected research and the data behind it. There is a tendency to move from real data to synthetic data where it proves effective: cameras can only capture what has happened, whereas synthesis can produce whatever you imagine or instruct the AI to generate. Synthetic data therefore offers more variety and raises fewer privacy concerns.

Though technological breakthroughs continue, computing challenges are becoming more complex and increasingly interdisciplinary. As emphasis on AI escalates in computer science and engineering, the mission will be to elevate the AI experience to mimic the human one in an ethical, trustworthy manner.

“We often show that we’ve improved on the state of the art in a statistically significant and often notable way, and then we suggest that it’s doing what humans do, but that’s really dangerous,” Tenenbaum concluded. “We should all be careful to distinguish between, ‘Oh, we made a small step towards a human-like, human-level thing,’ and we’re actually there.”

While technology has not yet caught up to human decision-making, dedicated attention to cognitive integration, reliable AI, and AI free of bias will help ensure the community gets it there and gets it right.

About the Author(s)

Rama Chellappa

Bloomberg Distinguished Professor at The Johns Hopkins University

Rama Chellappa is the CVPR 2022 Co-General Chair and Bloomberg Distinguished Professor in electrical and computer engineering and biomedical engineering at The Johns Hopkins University. He is affiliated with the Johns Hopkins Center for Imaging Science, the Center for Language and Speech Processing, the Institute for Assured Autonomy, and the Mathematical Institute for Data Science, and is the author of the recently published Can We Trust AI?
