A new generation of convolutional neural networks could be used to identify human speech.
Key points:
- Researchers say the network could provide a way to learn speech patterns from the brain’s natural speech recall
- The technology could help detect speech-related conditions such as autism
The network is the brainchild of Dr Michael E. Lee and his research group at Stanford University.
The team has created an artificial neural network that mimics the workings of the human brain.
“We know very little about the brain and its neural network,” Dr Lee told ABC News.
“Our work aims to shed light on the brain, and what makes it so powerful.”
The network, created by a team of researchers led by Dr Lee, was recently featured in a paper in the journal Nature Neuroscience.
Dr Lee’s team used a convolutional algorithm to create an artificial network that can mimic human speech patterns.
“It’s kind of a super-advanced version of what we’ve done in the past, where we have very large numbers of neurons that are connected, and they’re doing some pretty sophisticated computations, which means they can learn some of these things very quickly,” he said.
“But the downside is that there’s some learning that is not very fast.”
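The article does not give technical details of Dr Lee’s network, but the quote above describes the general recipe of a convolutional network: many connected units arranged in layers that learn patterns from data. The sketch below is a minimal, hypothetical PyTorch example of a convolutional classifier over speech spectrograms; the layer sizes, input shape and class count are illustrative assumptions, not the team’s actual model.

```python
# A minimal convolutional speech-pattern classifier (illustrative only).
# Assumes the input is a log-mel spectrogram treated as a 1-channel image.
import torch
import torch.nn as nn

class SpeechCNN(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        # Small filters slide over time and frequency, learning local
        # acoustic patterns; stacking layers builds progressively more
        # abstract features from them.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Average over the remaining time/frequency grid, then classify.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, mel_bands, time_frames)
        h = self.features(x)
        h = self.pool(h).flatten(1)
        return self.classifier(h)

# One forward pass on a dummy clip: 64 mel bands x 100 time frames.
model = SpeechCNN()
dummy = torch.randn(1, 1, 64, 100)
print(model(dummy).shape)  # torch.Size([1, 10])
```

Trained on labelled recordings, a network of this general shape learns which acoustic patterns distinguish one speech class from another, which is the kind of fast pattern learning Dr Lee describes.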
Dr Lee said the new system would be useful in identifying speech conditions, such as autism.
“One of the things that we see as a symptom of autism is the difficulty with learning speech,” Dr Lee said.
In a previous paper, Dr Lee and co-author Jiajia Chen reported a new-generation model of the brain with higher resolution than previous models.
“This model shows us that the brain is more than just a set of neurons, it has a whole lot of different layers,” Dr Chen said.
The researchers hope to see the new model used in other recognition tasks, such as the recognition of human faces.
“The goal is to have a new way to recognise speech,” said Dr Lee.
“That would enable us to find the exact patterns of speech, and the correct speech pattern for that person.”
For the first time, Dr Lee hopes to use the system to identify speech disorders.
“Currently we don’t have a way of understanding how speech is expressed in the brain.
We know a lot about how we communicate with our ears and the muscles, but we don’t, in essence, have a lot of information about what is actually happening in the head,” he explained.
“In order to understand the brain we need to learn a lot more about the mind, and how the mind processes language, which we don’t yet understand.”