Decoding Digital Minds to Understand Our Own
AI in mental health care could go very wrong. That’s why researchers are making it a priority to get it right.
Can a machine ever truly understand a human? The question has fueled philosophical debate for decades. But since the release of ChatGPT in 2022, it has become an urgent, practical one, as millions of people have turned to AI assistants for everything from homework help to emotional support.
The stakes are high. Last April, a 16-year-old in California died by suicide. His father discovered that ChatGPT had offered to help the boy write a suicide note and had dissuaded him from telling his parents about his distress. In August, the boy’s family filed a wrongful death lawsuit.
Earlier this year, a team at Brown landed a $20 million grant to launch ARIA, a national AI institute that will develop trustworthy AI assistants, starting with mental health applications. The foundation for this ambitious project was laid years ago by researchers asking fundamental questions about the nature of intelligence in humans and machines.

Does AI even understand what it’s saying?
Ellie Pavlick, a distinguished associate professor of computer science, spends her days trying to crack one of AI’s most perplexing puzzles: Do machines actually understand the words they use, or are they just very sophisticated mimics? In addition to being the inaugural director of ARIA, Pavlick works as a research scientist at Google DeepMind and leads a Brown lab that probes how both humans and AI systems make sense of language. It’s detective work that requires equal parts computer science, philosophy, and cognitive science, a combination that makes her something of an anomaly in a field often dominated by pure technologists.
Pavlick took an unconventional path to AI research, earning undergraduate degrees in economics and saxophone performance before pursuing a PhD in computer science. That multidisciplinary background shapes her approach to one of AI’s most fundamental questions: whether machines trained entirely on text can develop genuine understanding. “Words like intelligence, understanding, and thinking are being used to describe AI, but no one actually knows what they mean,” Pavlick says. Her team is working to change that.
For example, in one of Pavlick’s studies, she and a colleague tested whether AI trained only on text could understand real-world concepts like directions and colors. Surprisingly, after being shown just a few examples of what left means in a simple grid, GPT-3 could figure out what right means on its own, suggesting these systems might grasp spatial relationships despite never experiencing physical space.
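To make the setup concrete, here is a minimal sketch of how such a probe might look in code. It is illustrative only, not the study’s actual experiment, and `query_model` is a hypothetical stand-in for whatever text-only language model is being tested.

```python
# Illustrative sketch, not the study's actual code: probe whether a text-only
# model can generalize a spatial concept. We show it a few grid worlds labeled
# "left" and then ask it to complete an unlabeled "right" example.
# `query_model` is a hypothetical stand-in for the language-model API under test.

def render_grid(size: int, marker_col: int, agent_col: int, row: int) -> str:
    """Draw a small text grid with an agent 'A' and a marker 'X' on one row."""
    grid = [["." for _ in range(size)] for _ in range(size)]
    grid[row][marker_col] = "X"
    grid[row][agent_col] = "A"
    return "\n".join(" ".join(r) for r in grid)

def make_example(size: int, direction: str, row: int) -> str:
    """Build one few-shot demonstration with X to the agent's left or right."""
    agent_col = size // 2
    marker_col = agent_col - 1 if direction == "left" else agent_col + 1
    grid = render_grid(size, marker_col, agent_col, row)
    return f"{grid}\nThe X is to the {direction} of A."

# Three labeled "left" demonstrations, followed by an unlabeled "right" query.
prompt = "\n\n".join(make_example(5, "left", row) for row in range(3))
prompt += "\n\n" + render_grid(5, marker_col=3, agent_col=2, row=1) + "\nThe X is to the"

print(prompt)
# completion = query_model(prompt)  # hypothetical call to the model being probed
# A completion of " right" would suggest the model has generalized the concept.
```

The point of such a design is that the model only ever sees symbols on a page; if it nevertheless completes the unseen pattern correctly, that is evidence it has induced something like a spatial relation from text alone.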
This research illustrates one way to scientifically study “grounding,” a question from philosophy and linguistics about whether the meaning of words depends on connections to the physical world, sensory experiences, or social interactions.