Thinking Aloud


With significant advancements being made in the area, James Kelly looks at how science is giving a voice to your inner thoughts

As you read these words, an inner narrative almost speaks them to you. Soon, that little mental voice may be heard by others too, as what is essentially technological telepathy becomes a real possibility. By monitoring the activity of certain populations of neurons within the brain when speech is heard or words are thought of, then feeding that data through sophisticated language algorithms, researchers have succeeded in giving a voice to someone else’s inner thoughts. The latest work in the field has even gone beyond speech, to the level of meaning. While the work is still in its infancy, it is progressing rapidly and the results are startling.

Apart from giving new insight into exactly how our brains process speech, the technology could have enormous implications for those left speechless through paralysis or locked-in syndrome, offering a more fluent channel for that most important need: communication.

The first major proof-of-concept work was done by Professor Bradley Greger and his team at the University of Utah in 2010. In 2009, the same team had published work on reading the neural signals responsible for arm movement. In this experiment, Professor Greger and his research group focused on areas of the brain involved in speech.

Two button-sized grids, each containing 16 electrodes, were attached to the surface of the brain in areas associated with producing speech. One was attached to the area of the motor cortex responsible for movement of facial muscles, while the other was attached to Wernicke’s area, a part of the brain associated with language processing. Accessing the surface of the brain was possible, as the test subject had had part of his skull removed for surgery related to epilepsy. As the electrodes do not penetrate the brain matter, they are considered safe.

It is currently held that thinking a word and speaking a word activate some of the same signals within particular areas of the brain; these are the areas the team investigated. While the electrodes recorded, the subject repeatedly read a list of ten words that would be useful to a person suffering from paralysis: yes, no, hot, cold, hungry, thirsty, hello, goodbye, more and less. The subject then thought of words from the list, and the computer processed the recorded neural signals to determine and repeat the words. An accuracy of between 76% and 90% was achieved, which is astonishing considering what was done. Readings taken from the motor cortex electrodes proved to be slightly more accurate than those taken from Wernicke’s area.
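To make the decoding step concrete, here is a minimal sketch of how recordings from a 16-electrode grid might be classified into a ten-word vocabulary. It is not the Utah team's published pipeline: the synthetic data, trial counts and choice of classifier are all assumptions for illustration.

```python
# Illustrative sketch only: synthetic "neural" data and a generic classifier,
# standing in for the real task of mapping electrode recordings to ten words.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

WORDS = ["yes", "no", "hot", "cold", "hungry",
         "thirsty", "hello", "goodbye", "more", "less"]
N_ELECTRODES = 16          # one button-sized grid
N_TRIALS_PER_WORD = 40     # hypothetical number of repetitions

rng = np.random.default_rng(0)

# Fake features: one activity value per electrode, with a slightly different
# signature for each word so the classifier has something to learn.
signatures = rng.normal(size=(len(WORDS), N_ELECTRODES))
X = np.vstack([
    signatures[w] + rng.normal(scale=1.0, size=(N_TRIALS_PER_WORD, N_ELECTRODES))
    for w in range(len(WORDS))
])
y = np.repeat(np.arange(len(WORDS)), N_TRIALS_PER_WORD)

# Cross-validated accuracy: a stand-in for the 76-90% figures quoted above.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.0%}")
```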

Another researcher who has attempted to solve the problem at the level of muscle control is Frank Guenther, at Boston University, Massachusetts. His team has already implanted electrodes within the motor cortex regions controlling the shapes of the mouth, lips and larynx of a paralysed patient. Their current system has succeeded in producing vowel sounds, but has not yet managed full word detection. It is promising, but far from ideal.

Further development came in 2011, when a team led by Brian Pasley at the University of California, Berkeley, developed a process whereby multiple electrodes assess different aspects of sound processing within the brain and an algorithm learns from these, producing a spectrogram (like a graph of sound frequencies) that then acts as a reference for other words that are thought of.

While the electrodes used were similar to those used by Greger, their placement, and the reasoning behind it, were different. The electrodes were placed on the brain surface of 15 subjects, all undergoing surgical treatment for epilepsy, in areas of the temporal lobe involved in the processing of sound, as opposed to the formation of speech. Pasley’s subjects then listened to words, to calibrate the whole system and produce spectrograms for comparison.

Sounds within speech are made up of different frequencies, which are separated in the brain and processed by discrete groups of neurons. One group of cells may be responsible for frequencies around 1000Hz, while another deals only with those around 500Hz. By matching a group of neurons with its corresponding frequency and then monitoring the activity of the group, the team could tell what frequencies were being heard and when. Other important aspects of speech, such as the rhythm of syllables and fluctuations in frequency, were also taken into account. The information was then fed into a processor and compared to the spectrograms produced during calibration to determine what word the subject was thinking.
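As a rough illustration of that matching step, and not Pasley's actual algorithm, the sketch below treats each neuron group as reporting the activity in one frequency band, stacks those bands into a spectrogram, and picks the calibration word it correlates with best. The band centres, word list and data are all invented.

```python
# Toy reconstruction-and-matching sketch, assuming one neuron group per band.
import numpy as np

FREQ_BANDS_HZ = [250, 500, 1000, 2000, 4000]   # hypothetical band centres

def reconstruct_spectrogram(band_envelopes):
    """Stack one activity trace per neuron group (one frequency band each) into a spectrogram."""
    return np.vstack(band_envelopes)

def best_match(candidate, calibration):
    """Return the calibration word whose spectrogram correlates best with the candidate."""
    def corr(a, b):
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]
    return max(calibration, key=lambda word: corr(candidate, calibration[word]))

# Random stand-ins for spectrograms recorded while the subject listened to known words.
rng = np.random.default_rng(1)
calibration = {w: rng.random((len(FREQ_BANDS_HZ), 50)) for w in ["hello", "water", "help"]}

# A noisy re-recording of "water" should still be matched to "water".
band_envelopes = [calibration["water"][i] + rng.normal(scale=0.1, size=50)
                  for i in range(len(FREQ_BANDS_HZ))]
heard = reconstruct_spectrogram(band_envelopes)
print(best_match(heard, calibration))
```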

With computational power still increasing exponentially, the main limiting factor in the feasibility of this plan is building a wireless electrode small enough to implant, with a signal strong enough to transmit through the skull and a battery that lasts long enough not to need replacing. Hopefully such developments will be quick in coming, so that the frustration and pain of those suffering from such conditions can be lessened.

A Dutch team led by Joao Correia at Maastricht University seeks to go beyond the brain’s representations of the words themselves and see what it is that underlies their meaning and where it is centred. The reading of meaning is a recent segue from the speech-centred studies of the last few years. It could provide a new test for the level of awareness in those suffering from neural injury, or even those considered brain-dead, as detecting the processing of meaning becomes a possibility.

Correia took eight bilingual subjects and measured their neural activity in a functional magnetic resonance imaging (fMRI) machine, which can measure the level of activity within a discrete area of the brain and image it, while they listened to the English names of four animals: bull, shark, duck and horse. The words were chosen because they are monosyllabic, would have been learned at around the same time and all belong to the same category, animals. The activity in the left temporal lobe was monitored, and an algorithm was used to learn which pattern of activity related to each animal. The subjects then listened to the words in Dutch (which sound completely different to the English versions, e.g. paard versus horse) to eliminate the possibility that it was the processing of the words’ sounds that was being detected. This ensured that only the processing of the meaning of the words was being observed. While the areas activated by each word differed slightly between subjects, for each subject the same area was activated regardless of the language used.
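The logic of that train-on-English, test-on-Dutch design can be sketched as below. The voxel counts, trial numbers and classifier are assumptions and the data are simulated, but it shows why above-chance cross-language accuracy points to meaning rather than sound.

```python
# Hedged sketch of cross-language decoding: train a classifier on simulated
# fMRI patterns for English words, test it on patterns for the Dutch translations.
import numpy as np
from sklearn.svm import LinearSVC

ANIMALS = ["bull", "shark", "duck", "horse"]
N_VOXELS = 200            # hypothetical region-of-interest size
N_TRIALS = 30             # hypothetical repetitions per word per language

rng = np.random.default_rng(2)
# A shared "semantic" pattern per animal, common to both languages by construction.
meaning_patterns = rng.normal(size=(len(ANIMALS), N_VOXELS))

def simulate_session(noise):
    X = np.vstack([meaning_patterns[a] + rng.normal(scale=noise, size=(N_TRIALS, N_VOXELS))
                   for a in range(len(ANIMALS))])
    y = np.repeat(np.arange(len(ANIMALS)), N_TRIALS)
    return X, y

X_english, y_english = simulate_session(noise=2.0)   # training: English word trials
X_dutch, y_dutch = simulate_session(noise=2.0)       # testing: Dutch word trials

clf = LinearSVC(max_iter=5000).fit(X_english, y_english)
print(f"cross-language accuracy: {clf.score(X_dutch, y_dutch):.0%}")
```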

This distinction between processing the word and processing its meaning could radically alter what we currently deem consciousness. The technique could be used to determine whether the higher-level processing of meaning is taking place in a person who is seemingly unconscious or brain-dead. From there, communication and possibly improved rehabilitation may be but a stone’s throw away. It may even have Doctor Dolittle-esque implications.

Huge strides are being made in the reading of thought, and the technology is being developed to apply these findings practically, to the benefit of those who suffer so desperately. However, a lot has yet to be done, and while there are great prospects, the only certainty is that short of access to an fMRI, a couple of PhDs and your cooperation, no one is reading your thoughts any time soon.
