Date: Monday, February 1st, 1999, 00:00
University of Southern California biomedical engineers have created the world’s first machine system that can recognize spoken words better than a human can.
A critical difference lies in how neural networks are configured to imitate the brain's system of information processing. Data are processed not by a central processing unit but by an interlinked network of simple units called neurons. Rather than being programmed, neural nets learn to perform tasks through a training regimen in which desired responses to stimuli are reinforced and unwanted ones are not.
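The training regimen described above, reinforcing desired responses and discouraging unwanted ones, can be illustrated with a minimal sketch. This is not the Berger-Liaw system itself (whose details are not given in the article); it is a toy single-neuron perceptron learning a simple logical task from examples, with weights nudged toward the desired output after each stimulus:

```python
import random

def train_perceptron(samples, epochs=50, lr=0.1):
    """Train a single neuron by reinforcement: after each stimulus,
    nudge the weights toward the desired response."""
    random.seed(0)
    w = [random.uniform(-0.5, 0.5) for _ in range(len(samples[0][0]))]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out  # desired response minus actual response
            # Reinforce: shift weights in the direction that reduces the error
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Fire (output 1) if the weighted input exceeds the threshold."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy task: the net learns logical OR from examples, not explicit rules.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
```

The key point mirrors the article: no rule for OR is ever written into the program; the behavior emerges from repeated reinforcement during training.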
In benchmark tests conducted by the engineers, USC's Berger-Liaw Neural Network Speaker Independent Speech Recognition System bested all existing computer speech recognition systems and outperformed the keenest human ears.
The system might soon enable better voice control of computers, aid air traffic controllers, help the deaf, and instantly produce clean transcripts of conversations. The US Navy might also be interested in applying the technology to sonar systems.
An online demonstration of the system is available.