When Jaime Banks met Valkyrie, NASA's humanoid robot, at a robotics conference in 2017, its creators were demonstrating the robot's capabilities. As they showed how Valkyrie uses sensors to navigate its environment, Banks saw her face flash across the robot's visual field. The encounter had a powerful effect on her. At the time, Banks had been studying identity and relationships between online video gamers and their avatars, but Valkyrie introduced an entirely new dimension to her work. "I saw that robot see me and that blew my mind," says Banks, associate professor in the School of Information Studies (iSchool) who was named the Katchmar-Wilhelm Endowed Professor in 2024. "In that moment I felt like the robot was someone, and I wondered if people see minds in machines in the way I did."
For Banks, the doors of scientific inquiry swung wide open. She was drawn to the idea of exploring mind perception, in which humanlike mental qualities are attributed to entities like robots, which are then viewed as beings capable of thought, emotion or action. With the rise of social robots and generative AI chatbots, Banks became fascinated by how these technologies might connect with human cognition, behavior and attitudes. "They're going to be like us someday," she thought at the time.

School of Information Studies professor Jaime Banks (center) works in LinkLab with SOURCE Fellows Rio Harper '27 (left) and Gabriel Davila '26, who are members of her research team. Harper built the robotic arm using 3D printing and coded it from scratch.
As a communication scientist, longtime gamer and cyberpunk fiction enthusiast, Banks can easily envision a future where humans coexist with machines that have personalities and act independently. She's focused on understanding how we relate to AI-driven creations, how we perceive their humanness, and how we "make meaning together," as she puts it. In one grant-funded project, she investigated how mind perception and moral judgments influence trust in these relationships. "Mind perception is core to social interaction," says Banks, who also serves as the iSchool's Ph.D. program director.
As part of the research, Banks examined interactions between study participants and Ray, a social robot (pictured in top photo) with limited functions that she uses for research in LinkLab, where she works with a group of graduate and undergraduate students who help collect and analyze data from her studies. Among her findings, she suggested that "bad behavior is bad behavior, no matter if it's a human or a robot doing it. But machines bear a greater burden to behave morally, getting less credit when they do and more blame when they don't."

Banks examines how people create relationships and interact with AI companions.
Exploring AI Companion Relationships
Since then, Banks has expanded her explorations of human-machine communication and the evolving relationships that emerge from those interactions, including relationships with AI companions. These companions range from family-friendly social robots to romantic partners created in virtual spaces through generative AI chatbots. "Looking at how we understand ourselves in relation to other non-human things can be scientifically and practically useful," she says. "We can deepen our understanding of the human experience by exploring how we connect with things that are not like us."
After the AI companion app Soulmate shut down in 2023, Banks surveyed past users to learn what these virtual companions meant to them and how they were affected by the loss. Many of the users characterized the loss "as an actual or metaphorical person-loss," Banks reported, with some users indicating "it was the loss of a loved one (a close friend, love of their life) or even of a whole social world: 'She is dead along with the family we created,' including the dog."
This year, with the support of a three-year grant, Banks plans to look at the role that mind perception plays in whether and how people benefit from AI companions. "If we can determine whether benefits like reduced loneliness are linked to seeing the AI as someone, that can help inform the design of safer, more beneficial technologies, as well as advancing theories of companionship more generally," she says.

Along with her current studies on AI, Banks is a longtime video game enthusiast.
Assessing the Role of Large Language Models
AI companion apps are fueled by increasingly sophisticated large language models (LLMs), a type of AI that processes and generates human language. Unlike ChatGPT, these companion apps "have a certain amount of memory that allows them to sustain a relationship over time," Banks says. "One of the main concerns about the utility and appropriateness of language models is that they have fluency without comprehension."
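To make that distinction concrete, here is a minimal sketch, not any real app's code, of how a companion app might layer persistent memory on top of a stateless language model. The generate_reply function and the JSON file store are hypothetical placeholders for whatever LLM API and database a production app would actually use.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("companion_memory.json")  # hypothetical on-disk store

def load_memory() -> list:
    """Recall every exchange saved from earlier sessions."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(memory: list) -> None:
    """Persist the running history so the 'relationship' survives restarts."""
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def generate_reply(history: list, user_message: str) -> str:
    """Stand-in for a call to an underlying LLM. A stateless model sees only
    what the app passes in, so continuity must come from the app's memory."""
    return f"(reply conditioned on {len(history)} remembered turns)"

def chat_turn(user_message: str) -> str:
    memory = load_memory()                        # recall past sessions
    reply = generate_reply(memory, user_message)  # fluency from the model
    memory.append({"user": user_message, "companion": reply})
    save_memory(memory)                           # remember this exchange
    return reply

if __name__ == "__main__":
    print(chat_turn("Remember the trip we planned last week?"))
```

The design point is that the continuity Banks describes lives in the application layer: the model itself supplies only fluent text, while the stored history is what lets the companion appear to sustain a relationship over time.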
Banks is investigating how we communicate about LLMs. For example, if an LLM provides harmful advice or behaves badly, how do we judge it and hold it accountable from a moral perspective? Banks is examining how AI is represented in language across various social and institutional contexts, from AI systems themselves to technical, government and university documents to media. She notes that much of the language is steeped in anthropomorphism, and that how we view the machines (for instance, calling them "teammates" or "systems") will reflect whether we embrace their human qualities or disregard them. "We use mental shortcuts constantly in our daily lives, and a lot of these shortcuts become embedded in our everyday language," Banks says. "For instance, how does calling an AI error a 'hallucination' versus a 'prediction error' impact how we think about the badness of that error? The ways we use humanizing terms to refer to AI could be meaningfully impacting how we make important decisions."

Banks looks at the robotic arm, which her research team uses in studies.
Moving Into the Future
As Banks assesses the possibilities of the future, she considers the way futurist fiction, which she features in her graduate course Dynamics of Human AI Interaction, frames the times ahead. "I try to encourage the students to think about things that we can't really wrap our heads around right now, and what questions we should be answering," she says.
Are we in for a dystopian future where AI spins out of control, or will we control it? Will AI redefine what it means to be human? As we continue to shape and be shaped by artificial intelligence, Banks believes how we as individuals engage with these social technologies is crucial: we need to be thoughtful and aware of the moral and ethical issues involved in coexisting with them. "Part of that thoughtfulness," she says, "is generating rigorous empirical research so we can design more ethical technologies, be educated in our interactions with them and develop good policy."