'Sleep Language' Could Enable Communication During Lucid Dreams
Researchers have developed a "language" called Remmyo, which relies on specific facial muscle movements that can occur during rapid eye movement (REM) sleep. People capable of lucid dreaming can learn this language while awake and potentially communicate while asleep. Ars Technica reports: "You can transfer all important information from lucid dreams using no more than three letters in a word," [sleep expert Michael Raduga], who founded Phase Research Center in 2007 to study sleep, told Ars. "This level of optimization took a lot of time and intellectual resources."

Remmyo consists of six sets of facial movements that can be detected by electromyography (EMG) sensors on the face. During sleep paralysis, slight electrical impulses still reach the facial muscles and allow them to move; the sensors pick up these impulses and pass them to software that can type, vocalize, and translate Remmyo. Translation depends on which Remmyo letters the sleeper produces and the software detects; the software draws on multiple stored dictionaries and can translate Remmyo into another language as it is being "spoken" by the sleeper. "We can digitally vocalize Remmyo or its translation in real time, which helps us to hear speech from lucid dreams," Raduga said.

For his initial experiment, Raduga used the sleep laboratory of the Neurological Clinic of Frankfurt University in Germany. His subjects had already learned Remmyo and were trained to enter a lucid dream and signal that state during REM sleep. While they were immersed in lucid dreams, EMG sensors on their faces fed the electrical impulses to the translation software. The results were mixed.
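The pipeline described above — per-muscle EMG readings mapped to one of six letters, with short words looked up in a dictionary — might be sketched roughly like this. Every name, threshold, letter assignment, and dictionary entry below is invented for illustration; the actual Remmyo alphabet and Raduga's software are not public.

```python
# Toy sketch of an EMG-to-letter decoder in the spirit of the article.
# The six "letters", thresholds, and dictionary are invented; the real
# Remmyo alphabet and detection software are not public.

THRESHOLD_UV = 20.0  # assumed activation threshold, in microvolts

# Hypothetical mapping: one EMG channel (facial muscle) per letter.
CHANNEL_TO_LETTER = {0: "a", 1: "e", 2: "i", 3: "o", 4: "u", 5: "m"}

# Hypothetical dictionary of short words, per the "no more than three
# letters in a word" claim in the article.
DICTIONARY = {"aim": "help", "oma": "yes", "eu": "no"}

def decode_sample(channels):
    """Return the letter for the single channel above threshold, or None."""
    active = [i for i, v in enumerate(channels) if v >= THRESHOLD_UV]
    if len(active) != 1:  # cross-talk: several muscles strained at once
        return None
    return CHANNEL_TO_LETTER[active[0]]

def decode_word(samples):
    """Decode a sequence of EMG samples into a word, then translate it."""
    letters = [l for l in (decode_sample(s) for s in samples) if l]
    word = "".join(letters)
    return DICTIONARY.get(word, word)  # fall back to the raw letters

# One sample per intended letter: only one channel clearly active each time.
samples = [
    [25, 0, 0, 0, 0, 0],   # "a"
    [0, 0, 30, 0, 0, 0],   # "i"
    [0, 0, 0, 0, 0, 22],   # "m"
]
print(decode_word(samples))  # -> help
```

The rejection of samples where more than one channel fires foreshadows the main difficulty Raduga describes below.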
Based on attempts to translate planned phrases, Remmyo proved anywhere from 13 to 81 percent effective, and in the interview Raduga said he faced skepticism about the translation software's effectiveness during peer review of his study, now published in the journal Psychology of Consciousness: Theory, Research, and Practice. He hopes to make the results more consistent by improving the translation methods. "The main problem is that it is hard to use only one muscle on your face to say something in Remmyo," said Raduga. "Unintentionally, people strain more than one muscle, and EMG sensors detect it all. Now we use only handwritten algorithms to overcome the problem, but we're going to use machine learning and AI to improve Remmyo decoding."
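One plausible shape for the kind of "handwritten algorithm" Raduga alludes to is a dominant-channel rule: when several muscles fire at once, keep the sample only if one channel is clearly strongest. This is a guess at what such a heuristic might look like; his actual algorithms are not described, and the numbers here are invented.

```python
# Toy "handwritten" heuristic for the cross-talk problem the article
# describes. All thresholds and margins are invented for illustration.

def dominant_channel(channels, min_uv=20.0, margin=2.0):
    """Return the index of the clearly strongest channel, or None.

    The winner must exceed min_uv and be `margin` times stronger than
    the runner-up; otherwise the sample is rejected as ambiguous.
    """
    ranked = sorted(range(len(channels)), key=lambda i: channels[i], reverse=True)
    best, second = ranked[0], ranked[1]
    if channels[best] < min_uv:
        return None  # nothing fired strongly enough
    if channels[second] > 0 and channels[best] / channels[second] < margin:
        return None  # two muscles strained comparably hard: ambiguous
    return best

print(dominant_channel([40, 5, 0, 0, 0, 0]))   # -> 0 (clear winner)
print(dominant_channel([40, 30, 0, 0, 0, 0]))  # -> None (ambiguous)
```

A machine-learning replacement, as Raduga proposes, would instead learn per-subject activation patterns rather than relying on fixed thresholds like these.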
Read more of this story at Slashdot.