Reconstructing Speech from Brain Signals

The previous post reviewed a science fiction book, The Accord by Keith Brooke, in which virtual beings are created from brain scans. It seems that scanning and decoding brain activity is getting closer to reality: a team of scientists from UC Berkeley, UC San Francisco, the University of Maryland, and Johns Hopkins University managed to reconstruct individual words from the brain signals of patients listening to recorded speech (B. Pasley et al., PLoS Biology 2012).

Figure: Speech reconstruction experiment paradigm. Listening to acoustic waveforms (top left) produces time-resolved signals (bottom right) recorded by probes implanted in the brain (top right); these signals are then decoded into a spectrogram (bottom left). Image from the original article.
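For readers wondering what "decoding brain signals into a spectrogram" looks like in practice, here is a minimal, self-contained sketch of linear stimulus reconstruction on simulated data. The ridge regression, the lag window, and all dimensions are my own illustrative choices on toy data; they simplify, and do not reproduce, the models actually fitted in the paper.

```python
# Toy sketch of linear stimulus reconstruction on simulated data.
# All shapes, parameters, and the choice of ridge regression are
# illustrative assumptions, not the authors' actual pipeline.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_electrodes, n_freq_bins, n_lags = 2000, 32, 16, 10

# Simulated spectrogram (time x frequency) and neural responses that are a
# noisy linear mixture of it -- stand-ins for real recordings.
spectrogram = rng.standard_normal((n_samples, n_freq_bins))
mixing = rng.standard_normal((n_freq_bins, n_electrodes))
neural = spectrogram @ mixing + 0.5 * rng.standard_normal((n_samples, n_electrodes))

def lagged(x, lags):
    # Stack time-shifted copies so each spectrogram frame is predicted
    # from a short window of neural activity.
    return np.hstack([np.roll(x, lag, axis=0) for lag in range(lags)])

X = lagged(neural, n_lags)
train, test = slice(0, 1500), slice(1500, n_samples)

# One ridge regression (one output per frequency bin) maps neural
# activity to the spectrogram.
model = Ridge(alpha=1.0).fit(X[train], spectrogram[train])
reconstruction = model.predict(X[test])

# Score: correlation between actual and reconstructed spectrogram per bin.
corrs = [np.corrcoef(spectrogram[test, f], reconstruction[:, f])[0, 1]
         for f in range(n_freq_bins)]
print("mean reconstruction correlation:", round(float(np.mean(corrs)), 2))
```

On real recordings, the reconstructed spectrogram would then be compared with the spectrograms of candidate words to identify what the patient heard.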

The authors managed to reconstruct individual words by analyzing brain activity data. Reconstructing signals evoked by live events is, of course, very different from reading out a person's complete memory as described in the sci-fi story The Accord. In fact, I doubt that memory can be accessed using electrodes: electrodes pick up active electrical signals in the brain, whereas I suppose long-term memory is stored in a more hard-wired form. Nevertheless, the research article shows the tremendous progress being made by combining biology and information technology. It will be interesting to see when applications of this technology become available, for example to allow disabled persons to communicate better.

Last but not least, I would like to thank the authors for publishing their work in an open-access journal under a Creative Commons license. Otherwise, I would not have been able to read the article or to legally show the picture on this blog.