Julia Berezutskaya

Postdoctoral Researcher @UMC Utrecht

About Me

I focus on computational modeling of speech processes in the human brain. I try to bring together people working on natural language processing, machine learning, computational neuroscience and clinical neuroscience so that we can build powerful models of speech production and perception in the brain. Not only are such models important for our fundamental understanding of how the brain works, they are also essential for the development of assistive neurotechnology and brain-computer interfaces that can restore cognitive function in patients, for example by restoring communication in paralyzed individuals through decoding of attempted speech.

I am committed to open, reproducible science: I publish open-access and do my best to share the code and data for every published project.

Most of my projects revolve around neuroscience, machine learning, language and brain-computer interfaces.


Open Multimodal iEEG-fMRI Dataset (63 subjects)


We were among the first teams to publicly share human intracranial brain data from a cognitive task. Sixty-three subjects watched a short film while their brain responses were recorded using electrocorticography or stereoelectroencephalography. In addition, for many subjects we provide functional magnetic resonance imaging data acquired during the same task. We share the data for public use by researchers in cognitive neuroscience and other fields.

  • Berezutskaya, J., Vansteensel, M.J., Aarnoutse, E.J., Freudenburg, Z.V., Piantoni, G., Branco, M.P. & Ramsey, N.F. (2022). Open multimodal iEEG-fMRI dataset from naturalistic stimulation with a short audiovisual film. Scientific Data. doi: 10.1038/s41597-022-01173-0

Language in Interaction


Within Big Question 1, our team works on computational modeling of language processing in the brain. I develop neural encoding and decoding models that help us understand how the brain processes speech.

  • Berezutskaya, J., Freudenburg, Z.V., Ambrogioni, L., Güçlü, U., Gerven, M.A.J. van & Ramsey, N.F. (2020). Cortical network responses map onto data-driven features that capture visual semantics of movie fragments. Scientific Reports, 10:12077. doi: 10.1038/s41598-020-68853-y

  • Berezutskaya, J., Freudenburg, Z.V., Güçlü, U., Gerven, M.A.J. van & Ramsey, N.F. (2020). Brain-optimized extraction of complex sound features that drive continuous auditory perception. PLoS Computational Biology, 16 (7):e1007992. doi: 10.1371/journal.pcbi.1007992

  • Berezutskaya, J., Baratin, C., Freudenburg, Z.V. & Ramsey, N.F. (2020). High-density intracranial recordings reveal a distinct site in anterior dorsal precentral cortex that tracks perceived speech. Human Brain Mapping, 41 (16), 4587-4609. doi: 10.1002/hbm.25144

  • Berezutskaya, J., Freudenburg, Z.V., Güçlü, U., Gerven, M.A.J. van & Ramsey, N.F. (2017). Neural tuning to low-level features of speech throughout the perisylvian cortex. The Journal of Neuroscience, 37 (33), 7906-7920. doi: 10.1523/JNEUROSCI.0238-17.2017

Within WP-B, our team uses recent advances in computational modeling and artificial intelligence to develop real-world clinical applications in the field of neurotechnology. My work focuses on models for speech decoding and reconstruction from intracranial brain activity. My goal is to build and validate brain-computer interface solutions for severely paralyzed individuals and provide them with a means of communicating with the outside world.