
Scientists have discovered that the human brain understands spoken language in a surprisingly similar way to advanced AI systems.
A new study suggests that the human brain understands spoken language through an ordered series of steps that closely resemble how advanced AI language models work. By recording brain activity while participants listened to a spoken story, researchers found that later brain signals matched deeper layers of AI systems, especially in key language areas like Broca’s area. The results challenge long-standing rule-based explanations of language and are supported by a newly released public dataset that offers an important new tool for studying how the brain creates meaning.
Brain Activity Mirrors AI Language Models
The research, published in Nature Communications, was led by Dr. Ariel Goldstein of the Hebrew University in collaboration with Dr. Mariano Schain from Google Research, along with Prof. Uri Hasson and Eric Ham from Princeton University. The team uncovered an unexpected link between how humans interpret spoken language and how modern AI models process text.
Using electrocorticography recordings from people listening to a thirty-minute podcast, the researchers tracked brain responses with high precision. Their analysis showed that language processing in the brain unfolds in a structured sequence that closely matches the layered design of large language models such as GPT-2 and Llama 2.
How the Brain Builds Meaning Over Time
As someone listens to speech, the brain does not grasp meaning all at once. Instead, each word moves through a series of neural stages. Goldstein and his colleagues found that these stages develop over time in a way that closely parallels how AI language models operate. Early layers in AI focus on basic word features, while deeper layers combine context, tone, and overall meaning.
The same pattern appeared in the brain. Early brain responses lined up with the early stages of AI processing, while later responses matched deeper AI layers. This timing relationship was especially strong in advanced language areas such as Broca’s area, where peak brain activity occurred later when associated with deeper model layers.
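The lag-to-layer correspondence described above can be illustrated with a small synthetic simulation. This is not the authors' analysis pipeline; it is a minimal sketch, assuming made-up random "embeddings" standing in for language-model layers and a fabricated electrode signal constructed so that deeper layers drive later time lags. For each layer, a simple least-squares encoding model is fit at every lag, and the lag of peak predictive correlation is recorded:

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_layers, dim = 300, 8, 12
n_lags = 9  # e.g. 0-800 ms after word onset in 100 ms steps (hypothetical)

# Hypothetical stand-ins for per-layer language-model embeddings of each word.
layers = rng.standard_normal((n_layers, n_words, dim))

# Simulated electrode responses: by construction, layer l's embedding
# drives the signal at lag l + 1, so deeper layers peak later.
proj = rng.standard_normal(dim)
neural = np.zeros((n_lags, n_words))
for l in range(n_layers):
    neural[l + 1] += layers[l] @ proj
neural += 0.5 * rng.standard_normal(neural.shape)

def peak_lag(layer_emb):
    """Lag at which a least-squares encoding model predicts the signal best."""
    scores = []
    for t in range(n_lags):
        y = neural[t]
        beta, *_ = np.linalg.lstsq(layer_emb, y, rcond=None)
        scores.append(np.corrcoef(layer_emb @ beta, y)[0, 1])
    return int(np.argmax(scores))

peaks = [peak_lag(layers[l]) for l in range(n_layers)]
print(peaks)  # peak lag increases with layer depth
```

Running this recovers a monotonically increasing peak lag across layers, the same qualitative signature the study reports for real ECoG data in language areas such as Broca's area.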
According to Dr. Goldstein, “What surprised us most was how closely the brain’s temporal unfolding of meaning matches the sequence of transformations inside large language models. Even though these systems are built very differently, both seem to converge on a similar step-by-step buildup toward understanding.”
Why the Findings Matter
The results suggest that artificial intelligence is more than just a text-generating tool. It may also help scientists better understand how the human brain processes meaning. For many years, language comprehension was thought to depend on fixed symbols and rigid linguistic rules. This study challenges that idea and instead supports a more flexible, data-driven view in which meaning develops gradually through context.
The researchers also tested traditional linguistic elements such as phonemes and morphemes. These features did not explain real-time brain activity as well as the contextual representations generated by AI models. This supports the idea that the brain relies on broader context rather than strictly defined language units.
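The contrast between context-free linguistic units and context-sensitive representations can also be sketched with synthetic data. Again, this is an illustrative toy, not the study's method: fixed per-word vectors play the role of context-free features, a running blend of recent words plays the role of a transformer-like contextual embedding, and the fake "neural" signal is built to track context. A least-squares fit then shows which feature set explains the signal better:

```python
import numpy as np

rng = np.random.default_rng(1)
n_words, vocab, dim = 400, 50, 16

# Context-free features: each word always maps to the same fixed vector.
word_ids = rng.integers(0, vocab, n_words)
static_feats = rng.standard_normal((vocab, dim))[word_ids]

# Crude "contextual" features: each word blended with its recent context,
# a toy stand-in for transformer hidden states.
contextual = np.copy(static_feats)
for i in range(1, n_words):
    contextual[i] = 0.5 * static_feats[i] + 0.5 * contextual[i - 1]

# Simulated neural response tracks contextual meaning, plus noise.
w = rng.standard_normal(dim)
neural = contextual @ w + 0.8 * rng.standard_normal(n_words)

def fit_corr(X, y):
    """In-sample correlation of a least-squares encoding model."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.corrcoef(X @ beta, y)[0, 1])

r_static = fit_corr(static_feats, neural)
r_context = fit_corr(contextual, neural)
print(r_static, r_context)  # contextual features fit better
```

In this toy setup the context-sensitive features predict the signal substantially better than the fixed word vectors, mirroring the study's finding that contextual model representations outperform strictly defined language units.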
A New Open Resource for Neuroscience
To help advance research in this area, the team has released the complete set of neural recordings along with related linguistic features. By making this dataset publicly available, scientists around the world can compare different theories of language understanding and develop computational models that more closely reflect how the human brain works.
Reference: “Temporal structure of natural language processing in the human brain corresponds to layered hierarchy of large language models” by Ariel Goldstein, Eric Ham, Mariano Schain, Samuel A. Nastase, Bobbi Aubrey, Zaid Zada, Avigail Grinstein-Dabush, Harshvardhan Gazula, Amir Feder, Werner Doyle, Sasha Devore, Patricia Dugan, Daniel Friedman, Michael Brenner, Avinatan Hassidim, Yossi Matias, Orrin Devinsky, Noam Siegelman, Adeen Flinker, Omer Levy, Roi Reichart and Uri Hasson, 26 November 2025, Nature Communications.
DOI: 10.1038/s41467-025-65518-0