Space: The Final Frontier
US engineers have developed an artificial intelligence (AI)-enabled system that can translate brain signals into intelligible speech, a breakthrough that may help people who cannot speak to communicate with the outside world. The study, led by Columbia University researchers, showed that by monitoring a person's brain activity, AI-enabled technology can reconstruct the words that person hears with unprecedented clarity, according to news sources.

A team of neuroscientists from the university measured the brain activity patterns of epilepsy patients already undergoing brain surgery while the patients listened to sentences spoken by different people, and used those patterns to train a voice synthesiser, or vocoder. The patients then listened to speakers reciting digits from zero to nine while their brain signals were recorded and fed through the vocoder. A neural network, a type of artificial intelligence, analysed and cleaned up the vocoder's output, producing a robotic-sounding voice repeating the digits.

"We found that people could understand and repeat the sounds about 75 per cent of the time, which is well above and beyond any previous attempts," says Nima Mesgarani, from the university. Previous research has shown that when people speak, or even imagine speaking, distinctive patterns of activity appear in the brain; similar patterns also emerge when we listen to someone speak or imagine listening.
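The pipeline the article describes, recording neural activity while a listener hears speech and then learning a mapping from those signals to parameters a vocoder can turn back into sound, can be sketched in miniature. The sketch below is illustrative only and is not the researchers' method: it substitutes synthetic random data for real intracranial recordings and a plain least-squares fit for the deep neural network used in the study; all array names, shapes, and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (all sizes hypothetical): 500 time frames of
# "neural activity" across 64 recording channels, and the matching
# "vocoder parameters" in 32 spectral bands.
n_frames, n_channels, n_bands = 500, 64, 32
true_map = rng.normal(size=(n_channels, n_bands))
neural = rng.normal(size=(n_frames, n_channels))
audio = neural @ true_map + 0.1 * rng.normal(size=(n_frames, n_bands))

# "Training": learn a neural-to-vocoder mapping by least squares.
# (The study trained a deep network; this linear fit is a toy proxy.)
weights, *_ = np.linalg.lstsq(neural, audio, rcond=None)

# "Decoding": reconstruct vocoder parameters from new neural frames,
# which a real vocoder would then render as audible speech.
test_neural = rng.normal(size=(10, n_channels))
reconstructed = test_neural @ weights

# With low noise, the learned mapping closely recovers the true one.
err = float(np.abs(weights - true_map).mean())
```

The design point the toy captures is that decoding is a regression problem: the model never "reads words" directly, it predicts acoustic parameters frame by frame, and the vocoder turns those predictions into sound.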