
News

The human brain tracks speech more closely in time than other sounds

The way that speech processing differs from the processing of other sounds has long been a major open question in human neuroscience. Researchers at Aalto University have endeavoured to answer this by investigating brain representations of natural spoken words using machine learning models and comparing them with representations of environmental sounds that refer to the same concepts, such as the word "cat" and a cat meowing.
The cerebral cortex tracks the features of a sound very precisely in time in order to understand speech. Image: Aalto University

Humans can effortlessly recognize and react to natural sounds, and are especially tuned to speech. Several studies have aimed to localize and understand the speech-specific parts of the brain, but because the same brain areas are largely active for all sounds, it has remained unclear whether the brain has processes unique to speech, and how it performs them. One of the main challenges has been to describe how the brain matches highly variable acoustic signals to linguistic representations when there is no one-to-one correspondence between the two, e.g. how the brain recognizes a word as the same word when it is spoken by very different speakers or in different dialects.

For this latest study, the researchers, led by Professor Riitta Salmelin, decoded and reconstructed spoken words from millisecond-scale brain recordings in 16 healthy Finnish volunteers. They adopted the novel approach of using the natural acoustic variability of a large variety of sounds (words spoken by different speakers, environmental sounds from many categories) and mapping them to magnetoencephalography (MEG) data using physiologically inspired machine-learning models. Such models, with time-resolved and time-averaged representations of the sounds, have been used in brain research before. The novel, scalable formulation by co-lead author Ali Faisal made it possible to apply them to whole-brain recordings, and this study is the first to compare the same models for speech and other sounds.
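The contrast between the two kinds of model can be illustrated with a toy simulation. The sketch below assumes nothing from the study itself: all data are synthetic, and the lag, window size, and regularization values are arbitrary choices for illustration. It fits a closed-form ridge regression to a simulated recording channel that tracks an acoustic envelope with a fixed delay, once with time-lagged (time-resolved) stimulus features and once with features averaged over windows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a 1-D "acoustic envelope" and one simulated
# recording channel that tracks it with a fixed 10-sample delay.
n = 1000
env = rng.standard_normal(n)
meg = np.roll(env, 10) + 0.1 * rng.standard_normal(n)

def lagged_design(x, max_lag):
    """Time-resolved model: stack lagged copies of the stimulus."""
    return np.column_stack([np.roll(x, k) for k in range(max_lag + 1)])

def ridge_fit_predict(X, y, alpha=1.0):
    """Closed-form ridge regression; returns in-sample predictions."""
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
    return X @ w

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Time-resolved model: the lagged features let it align with the
# delayed response sample by sample.
r_dyn = corr(ridge_fit_predict(lagged_design(env, 20), meg), meg)

# Time-averaged model: each 50-sample window is collapsed to one mean
# value, discarding the fine temporal structure.
win = 50
X_avg = np.repeat(env.reshape(-1, win).mean(axis=1), win)[:, None]
r_avg = corr(ridge_fit_predict(X_avg, meg), meg)

print(f"time-resolved r = {r_dyn:.2f}, time-averaged r = {r_avg:.2f}")
```

Because this simulated channel is time-locked to the stimulus by construction, only the lagged model can align with it; the averaged model throws away exactly the temporal structure that carries the signal. This mirrors, in a very simplified form, the speech vs. non-speech contrast the study reports.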

Aalto researcher and lead author Anni Nora says, 'We discovered that time-locking of the cortical activation to the unfolding speech input is crucial for the encoding of speech. When we hear a word, e.g. "cat", our brain has to follow it very accurately in time to be able to understand its meaning'.

In contrast, time-locking was not highlighted in the cortical processing of non-speech environmental sounds that conveyed the same meanings as the spoken words, such as music or laughter. Instead, a time-averaged analysis is sufficient to access their meanings. 'This means that the same representations (what a cat looks like, what it does, how it feels, etc.) are accessed by the brain also when you hear a cat meowing, but the sound itself is analyzed as a whole, without a need for similar time-locking of brain activation', Nora explains.

Time-locked encoding was also observed for meaningless new words. However, even responses to human-made non-speech sounds such as laughing didn't show improved decoding with the dynamic time-locked mechanism and were better reconstructed using a time-averaged model, suggesting that time-locked encoding is special for sounds identified as speech.

Results indicate that brain responses follow speech with especially high temporal fidelity

The current results suggest that a special time-locked encoding mechanism may have evolved for speech in humans. Based on other studies, this mechanism seems to become tuned to the native language through extensive exposure to the language environment during early development.

The present finding of time-locked encoding, especially for speech, deepens the understanding of the computations required for mapping between acoustic and linguistic representations (from sounds to words). It also raises the question of which specific aspects of sounds cue the brain into using this special mode of encoding. To investigate this further, the researchers next aim to use naturalistic auditory environments, such as overlapping environmental sounds and speech.

'Future studies should also determine whether similar time-locking might be observed with specialization in processing other sounds through experience, e.g. for instrumental sounds in musicians', Nora says.

Future work could investigate the contribution of different properties within speech acoustics and the possible effect of an experimental task in boosting the use of the time-locked or time-averaged mode of sound processing. These machine learning models could also be very useful when applied to clinical groups, for example in investigating individuals with impaired speech processing.

Read More:

Dynamic time-locking mechanism in the cortical representation of spoken words

Contact information:

Dr Anni Nora, Aalto University
anni.nora@aalto.fi

Professor Riitta Salmelin, Aalto University
Phone: +358 50 344 2745
riitta.salmelin@aalto.fi
