Sense and Sensoribility
I'm writing a book about augmented reality, which forced me to confront a central question: When will this technology truly arrive? I'm not talking about the smartphone-screen versions offered up by the likes of Pokémon Go and Minecraft Earth, but about the long-promised form that will require nothing more cumbersome than what feels like a pair of sunglasses.
Virtual reality is easier. It can now be delivered, in reasonable quality, for a few hundred dollars. The nearest equivalent for AR, Microsoft's second-generation HoloLens, costs an order of magnitude more while visually delivering a lot less. Ivan Sutherland's pioneering Sword of Damocles AR system, built in 1968, is more than a half-century old, so you might expect that we'd be further along. Why aren't we?
Computation proved to be less of a barrier to AR than anyone believed back in the 1960s, as general-purpose processors evolved into application-specific ICs and graphics processing units. But the essence of augmented reality, the manipulation of a person's perception, cannot be achieved by brute computation alone.
Connecting what's inside our heads to what is outside our bodies requires a holistic approach, one that knits into a seamless cloth the warp of the computational and the weft of the sensory. VR and AR have always lived at this intersection, limited by electronic sensors and their imperfections, all the way back to the mechanical arm that dangled from the ceiling and connected to the headgear in Sutherland's first AR system, inspiring its name.
Today's AR technology is much more sophisticated than Sutherland's contraption, of course. To sense the user's surroundings, modern systems employ photon-measuring time-of-flight lidar or process images from multiple cameras in real time, solutions that remain computationally expensive even now. But much more is required.
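To make that sensing burden concrete, here is a minimal back-of-the-envelope sketch, not drawn from the column itself, of the arithmetic behind time-of-flight lidar: distance is half the round-trip time of a light pulse multiplied by the speed of light. The function name and the 10-nanosecond example are illustrative assumptions.

```python
# Back-of-the-envelope time-of-flight arithmetic (illustrative, not from
# the column): a lidar emits a light pulse, times its return, and converts
# that round trip into a distance. A real sensor repeats this for millions
# of points per second, which is part of why AR sensing is so expensive.

C = 299_792_458.0  # speed of light, in meters per second

def tof_distance_m(round_trip_seconds: float) -> float:
    """Distance to a surface, given a pulse's round-trip time in seconds."""
    return C * round_trip_seconds / 2.0

# A pulse returning after 10 nanoseconds implies a surface about 1.5 meters
# away, roughly arm's length, the scale an AR headset must resolve.
print(tof_distance_m(10e-9))  # ~1.499 m
```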
Human cognition integrates various forms of perception to provide our sense of what is real. To reproduce that sense, an AR system must hitch a ride on the mind's innate workings. Today's AR systems focus on two of those senses: vision and hearing. Stimulating our eyes and ears is easy enough to do with a display panel or a speaker situated meters away, where it occupies just a corner of our awareness. The difficulty rises steeply as we place these synthetic information sources closer to our eyes and ears.
Although virtual reality can now transport us to another world, it does so by effectively amputating our bodies, leaving us to explore these ersatz universes as little more than a head on a stick. The person doing so feels stranded, isolated, alone, and all too frequently motion sick. We can network participants together in these simulations (the much-promised "social VR" experience), but bringing even a second person into a virtual world is still beyond the capabilities of broadly available gear.
Augmented reality is even harder, precisely because it doesn't ask us to sacrifice our bodies or our connection to others. Instead, an AR system must measure and maintain a model of the real world sufficient to enable a smooth fusion of the real with the synthetic. Today's technology can just barely do this, and not at a scale of billions of units.
Like autonomous vehicles (another blend of sensors and computation that looks easier on paper than it proves to be in practice), augmented reality continues to surprise us with its difficulties and dilemmas. That's all to the good. We need hard problems, ones that can't be solved with a straightforward technological fix but require deep thought, reflection, insight, even a touch of wisdom. Getting to a solution means more than building a circuit. It means deepening our understanding of ourselves, which is always a good thing.
When All Reality Is Virtual
Photo: Jamie MacFadyen
We're pleased to announce the debut, in this issue, of a new column, Macro & Micro. Perhaps you've heard of its author, Mark Pesce. If not, prepare to be impressed.
An early milestone in his engineering career was his founding, in 1991, of Ono-Sendai Corp., named after a fictional company in William Gibson's science-fiction classic Neuromancer (Ace, 1984). In the real world, Ono-Sendai became the world's first consumer virtual-reality startup.
Pesce was one of the inventors of the orientation sensor that Sega Corp. adopted for its Sega VR head-mounted display, and he was among the developers of the Virtual Reality Modeling Language (VRML).
In 1996, Pesce cofounded BlitCom, the first company to use VRML to deliver streaming 3D entertainment over the Web. Two years later, Pesce helped create the graduate program in interactive media at the University of Southern California. Not long afterward, he was invited to Sydney to develop a postgraduate program in interactive and emerging media at the Australian Film Television and Radio School. Pesce soon made his home in Sydney, where he now serves as entrepreneur-in-residence at the University of Sydney's Incubate program.
In addition to being an engineer and a teacher, Pesce is also a popularizer. In 2005, the Australian Broadcasting Corp. invited him to become a panelist and judge on the television series "The New Inventors." In 2012, Pesce published his sixth book, The Next Billion Seconds (Blurb Books), which explores a world where everyone is "hyperconnected." In 2014, he and Jason Calacanis launched the podcast "This Week in Startups Australia." Later, Pesce started "The Next Billion Seconds" podcast. And since 2014, he's been a columnist for The Register. Somehow, he also finds time to consult on blockchain-based technologies for banks and fintech firms.
At the end of 2017, Meanjin Quarterly published Pesce's essay "The Last Days of Reality," which describes a future in which it becomes impossible to know what is true. Well, folks, we're there. We hope that Pesce's columns in IEEE Spectrum will help you to navigate that new reality.