Ask Slashdot: What Will Language Be Like In a Future 'Human-Machine Era'?
Long-time Slashdot reader united_notions is trying to envision "the 'human-machine era', a time when the tech has moved out of our hands and into our ears, eyes, and brains."

Real-time captioning of conversation. Highly accurate instant translation. Auto voice mimicry that makes it sound like you speaking the translation. Real-time AR facial augmentation that makes it also look like you speaking the translation. Meanwhile, super-intelligent, Turing-passing chatbots that look real and can talk tirelessly about any topic, in different languages, in anyone's voice. Then, a little further into the future, brain-machine interfaces that turn your thoughts into language, saving you the effort of talking at all... Slashdot has long reported on the development of all these technologies. They are coming.

When these are no longer futuristic but widespread everyday devices, what will language and interaction actually be like? Would you trust instant auto-translation while shopping? On a date? At a hospital? How much would you interact with virtual characters? Debate with them? Learn a new language from them? Socialise with them, or more? Would you wear a device that lets you communicate without talking? And with all this new tech, would you trust tech companies with the bountiful new data they gather?

Meanwhile, what about the people who get left behind as these shiny new gadgets spread? As always with new tech, they will be prohibitively expensive for many. And despite rapid improvements, progress will for some years remain slower for smaller languages around the world, and much slower still for sign language, despite the hype.

"Language in the Human-Machine Era" is an EU-funded research network putting together all these pieces. Watch our animations setting out future scenarios, read our open access forecast report, and contribute to our big survey!