Learning enhancement | Perception | Warnke method

Information on the topic of learning enhancement and the underlying Warnke method. You can also find further information in our channel flyers and brochures.

Music
Study shows some infants can identify differences in musical tones at six months
New research from neuroscientists at York University suggests that the capacity to hear the highs and lows in music, also known as the major and minor notes, may come before you take a single lesson; you may actually be born with it.

The study, published in the Journal of the Acoustical Society of America, examined the capacity of six-month-old infants to discriminate between a major and a minor musical tone sequence with a unique method that uses eye movements and a visual stimulus.

Previous research with adults has shown that approximately 30 per cent of adults can discriminate this difference but 70 per cent cannot, irrespective of musical training. Researchers found that six-month-old infants show exactly the same breakdown as adults: approximately 30 per cent of them could discriminate the difference and 70 per cent could not.
Visit news.yorku.ca for more info.
Relevant publications and specialist articles (general)
The human brain tracks speech more closely in time than other sounds
Summary: Research suggests a time-locked encoding mechanism may have evolved for speech processing in humans. The processing mechanism appears to be tuned to the native language as a result of extensive exposure to the language environment during early development.

Source: Aalto University

Humans can effortlessly recognize and react to natural sounds and are especially attuned to speech. Several studies have aimed to localize and understand the speech-specific parts of the brain, but because the same brain areas are largely active for all sounds, it has remained unclear whether the brain has unique processes for speech, and how it performs them. One of the main challenges has been to describe how the brain matches highly variable acoustic signals to linguistic representations when there is no one-to-one correspondence between the two, for example, how it recognizes the same words as identical when they are spoken by very different speakers and in different dialects.
Can a Newborn’s Brain Discriminate Speech Sounds?
Summary: It’s a question most new parents ponder: can a newborn baby discriminate between speech sounds? Researchers found newborn babies encode voice pitch at a level comparable to adults who have been exposed to a new language for three years. However, there are some differences when it comes to distinguishing between the spectral and temporal fine structures of certain sounds.

Source: University of Barcelona

People’s ability to perceive speech sounds has been studied in depth, particularly during the first year of life. But what happens during the first hours after birth? Are babies born with innate abilities to perceive speech sounds, or do neural encoding processes need time to mature?
Changing the Connection Between the Hemispheres Affects Speech Perception
Summary: Using HD-tACS brain stimulation, researchers influenced the integration of speech sounds by altering the balance between the two brain hemispheres.

Source: Max Planck Institute

When we listen to speech sounds, the information that enters our left and right ear is not exactly the same. This may be because acoustic information reaches one ear before the other, or because the sound is perceived as louder by one of the ears. Information about speech sounds also reaches different parts of our brain, and the two hemispheres are specialised in processing different types of acoustic information. But how does the brain integrate auditory information from different areas?
Sport | Peak Performance (High-Performance Training)
Head to Toe: Study Reveals Brain Activity Behind Missed Penalty Kicks
Summary: Soccer players who feel anxious at the thought of taking a penalty kick and who then miss the goal show more activity in the prefrontal cortex. Overthinking the shot, researchers say, could play a role in missing a goal.

Source: Frontiers

Are penalty shots a soccer player’s dream or nightmare? What should be an easy shot can become a mammoth task when the hopes and fears of an entire nation rest on a player’s shoulders, leading them to choke under pressure.
Dyslexia
Difficulty Learning Nonsense Words May Indicate a Child’s High Risk of Dyslexia
Summary: Children at high risk for dyslexia have trouble learning new words after hearing them, a new study reports. Results show those at risk of dyslexia have broader difficulties in processing language in the brain, which may account for why reading difficulties occur.

Source: Aalto University

Researchers at Aalto University and the Niilo Mäki Institute have used neuroimaging to pinpoint where the brain activates – or doesn’t activate – among children identified as having a high risk of dyslexia. Magnetoencephalography (MEG) has rarely been used to study the reading disorder in children.

In this channel you will find information about our subject area LEARNING ENHANCEMENT | PERCEPTION | Warnke method.