Huberman Lab

How Your Thoughts Are Built & How You Can Shape Them | Dr. Jennifer Groh

November 10, 2025 • 2h 16m

Summary

⏱️ 8 min read

Overview

Dr. Jennifer Groh, a professor of psychology and neuroscience at Duke University, explores how the brain integrates sensory information—particularly vision and hearing—to create our experience of the world. The conversation delves into sound localization, the dynamic nature of sensory maps in the brain, multisensory integration, and ultimately arrives at a fascinating theory of what thoughts actually are: simulations run using our sensory-motor brain infrastructure. The discussion also covers practical applications for focus, attention, and cognitive performance.

Sound Localization and the Superior Colliculus

Dr. Groh explains how the brain determines where sounds are coming from, beginning with her fascination with the superior colliculus—a brain structure responsive to both visual and auditory stimuli. A remarkable finding is that neurons' responses to auditory stimuli depend on where the eyes are looking, with receptive fields shifting as the eyes move. This creates dynamic maps that update constantly as we move our eyes, requiring sophisticated computational processes happening beneath our conscious awareness.

  • The superior colliculus is responsive to both visual and auditory stimuli, with auditory responses depending on eye position
  • Neurons' receptive fields shift as the eyes move, creating dynamic spatial maps (see the coordinate sketch after the quote below)
  • The brain does massive computation under the hood to prevent us from experiencing smeared, shifting visual scenes with each eye movement
" Every time your eyes move, the visual scene is shifting massively on the retina. But we don't even notice this. And this is an indication that the brain is doing a ton of computation under the hood to give us that perceptual experience. "

How We Localize Sound: Physics and Computation

The way we determine where sounds come from relies on tiny timing and intensity differences between our two ears. The maximum timing difference is only about half a millisecond, less than the duration of a single action potential, yet the brain extracts spatial information at this temporal precision (see the sketch after the list below). This depends on specialized synapses with minimal delay and on populations of neurons firing together, demonstrating remarkable computational power.

  • Sound localization depends on differential delays and intensity differences between the two ears
  • The maximum timing difference between ears is about half a millisecond—less than a single action potential duration
  • The brain uses precise synapses and populations of neurons firing together to achieve this temporal precision
  • As children grow, they must continuously relearn sound localization as their head size changes
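
As a rough check on the half-millisecond figure, here is a minimal sketch of the simple straight-path geometry: the extra distance to the far ear is roughly the ear separation times the sine of the source's azimuth, divided by the speed of sound. The 18 cm ear separation and the specific angles are assumptions for illustration, and the model ignores diffraction of sound around the head.

```python
import math

SPEED_OF_SOUND_M_PER_S = 343.0   # speed of sound in air at roughly 20 °C
EAR_SEPARATION_M = 0.18          # assumed adult interaural distance (~18 cm)


def interaural_time_difference_s(azimuth_deg: float) -> float:
    """Approximate interaural time difference for a distant source.

    azimuth_deg: 0 = straight ahead, 90 = directly to one side.
    Straight-path approximation; ignores diffraction around the head.
    """
    path_difference_m = EAR_SEPARATION_M * math.sin(math.radians(azimuth_deg))
    return path_difference_m / SPEED_OF_SOUND_M_PER_S


for azimuth in (0, 15, 45, 90):
    itd_ms = interaural_time_difference_s(azimuth) * 1000
    print(f"{azimuth:>3} deg -> {itd_ms:.3f} ms")
```

For a source directly to one side this gives roughly 0.5 ms, consistent with the ceiling mentioned above; changing EAR_SEPARATION_M also shows why a growing head forces a child to keep relearning the mapping from timing differences to locations.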
