For most of human history, the way to get custom shapes and colors onto one’s retinas was to draw them on a cave wall, a piece of parchment, or on paper. Later on, we invented electronic displays and used them for everything from televisions to computers, even toying with displays that give the illusion of a 3D shape existing in front of us. Yet what if one could skip this intermediate surface entirely and draw directly onto our retinas?
Admittedly, the thought of aiming lasers directly at the layer of cells at the back of our eyeballs — the delicate organs which allow us to see — likely does not evoke the same response as the thought of sitting in front of a 4K, 27″ gaming display to look at the same content. Yet effectively we’d have the same photons painting the same image on our retinas. And what if it could be an 8K display, cinema-sized? Or maybe a HUD overlay instead, like in video games?
In many ways, this concept of virtual retinal displays, as they are called, sounds almost too much like science fiction, and yet it has been the subject of decades of research, with increasingly sophisticated technologies bringing it closer to an everyday reality. Will we be ditching our displays and TVs for this technology any time soon?
A Complex Solution to a Simple Question
The Mark I human eye is a marvel produced through evolutionary processes over millions of years. Although missing a few bug fixes that were included in the cephalopod eye, it nevertheless packs a lot of advanced optics, a high-density array of photoreceptors, and super-efficient signal processing hardware. Before a single signal travels from the optic nerve to the brain’s visual cortex, the neural network inside the eye will have processed the incoming visual data to leave just the important bits that the visual cortex needs.
The basic function of the eye is to use its optics to keep the image of what is being looked at in focus. For this it uses a ring of smooth muscle called the ciliary muscle to change the shape of the lens, allowing the eye to change its focal distance, with the iris controlling the amount of light that enters the eye. This enables the eye to focus the incoming image onto the retina so that the area with the most photoreceptors (the fovea centralis) is used for the most important thing in the scene (the focus), with the rest of the retina used for our peripheral vision.
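To get a feel for what the ciliary muscle accomplishes, we can sketch accommodation in simple thin-lens terms: focusing an object at distance d requires adding roughly 1/d diopters on top of the eye’s relaxed optical power. The ~60 D relaxed power used below is a common textbook approximation, not an anatomical measurement.

```python
# Rough thin-lens sketch of accommodation: to keep an object at
# distance d (in meters) focused on the retina, the eye's optics
# must add about 1/d diopters on top of their relaxed power.
RELAXED_POWER_D = 60.0  # approximate total power of the relaxed eye, diopters

def required_power(object_distance_m: float) -> float:
    """Total optical power (diopters) needed to focus an object."""
    return RELAXED_POWER_D + 1.0 / object_distance_m

for d in (0.25, 1.0, 6.0):
    extra = required_power(d) - RELAXED_POWER_D
    print(f"object at {d:>5.2f} m -> +{extra:.2f} D of accommodation")
```

Reading distance (25 cm) demands about +4 D of accommodation, while anything beyond a few meters needs almost none, which is why distant objects stay in focus with a relaxed eye.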
The simple question when it comes to projecting an image onto the retina thus becomes: how to do this in a way that plays nicely with the existing optics and focusing algorithms of the eye?
Giving the Virtual a Real Place
In the naive and simplified model of virtual retinal display technology, three lasers (red, green and blue, for a full-color image) scan across the retina to allow the subject to perceive an image as if its photons came from a real-life object. As we noted in the previous section, however, this is not what we’re working with in reality. We cannot simply scan across the retina, as the eye’s lens will refract the light, and this refraction changes as the eye adjusts its focal distance.
The only part of the retina that we’re interested in is the fovea, as it is the only section of the retina with a dense cluster of cones (the photoreceptors capable of sensing the frequency of light, i.e. color). The rest of the retina is used only for peripheral vision, containing mostly (black-and-white sensing) rods and very few cones. To project clearly identifiable images onto a retina, we thus have a 1.5 mm wide fovea to work with, with the 0.35 mm diameter foveola providing the best visual acuity.
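Those millimeter figures translate into a surprisingly small slice of the visual field. A quick back-of-the-envelope calculation, assuming the schematic-eye figure of roughly 17 mm from the eye’s nodal point to the retina (an approximation, not a measured value):

```python
import math

# How much of the visual field a patch of retina covers, assuming
# ~17 mm from the eye's nodal point to the retina (schematic-eye value).
NODAL_DISTANCE_MM = 17.0

def retinal_angle_deg(width_mm: float) -> float:
    """Visual angle (degrees) subtended by a patch of retina of given width."""
    return math.degrees(2 * math.atan(width_mm / 2 / NODAL_DISTANCE_MM))

print(f"fovea   (1.5 mm):  {retinal_angle_deg(1.5):.1f} deg")
print(f"foveola (0.35 mm): {retinal_angle_deg(0.35):.1f} deg")
```

The 1.5 mm fovea covers only about 5° of the visual field, and the foveola barely more than 1°, which is why a retinal projector has such a tiny target to hit.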
Hitting this part of the retina requires that the subject either consciously focuses on the projected image in order to perceive it clearly, or that the system adjusts for the focal distance of the eye at any given time. After all, the eye assumes that all photons come from a real-life object, with a specific location and distance. Any issues with this process can result in eyestrain, headaches and worse, as we have seen with tangentially related technologies such as 3D movies in cinemas as well as virtual reality systems.
Smart Glasses: Keeping Things Traditional
Most people are probably aware of head-mounted displays, also called ‘smart glasses’. What these do is create a similar effect to what can be accomplished with virtual retinal display technology, in that they display images in front of the subject’s eyes. This is used for applications like augmented (mixed) reality, where information and imagery can be super-imposed on a scene.
Google made a bit of a splash a few years back with their Google Glass smart glasses, which use special, half-silvered mirrors to guide the projected image into the subject’s eyes. Like the later Enterprise versions of Google Glass, Microsoft is targeting their HoloLens technology at the professional and education markets, using combiner lenses to project the image onto the tinted visor, similarly to how head-up displays (HUDs) in airplanes work.
Magic Leap’s Magic Leap One uses waveguides that allow an image to be displayed in front of the eye, on different focal planes, akin to the technology used in third-generation HUDs. Compared to the more futuristic-looking HoloLens, these look more like welding goggles. Both the HoloLens and Magic Leap One are capable of full AR, whereas the Google Glass lends itself more to use as a basic HUD.
Although smart glasses have their uses, they’re definitely not very stealthy, nor are most of them suitable for outdoor use, especially in bright sunlight or hot summer weather. It would be great if one could skip the cumbersome head strap and goggles or visor. This is where virtual retinal displays (VRDs) come into play.
Painting with Lasers and Tiny Mirrors
Naturally, the very first question that may come to one’s mind when hearing about VRDs is why it’s suddenly okay to shine not one but three lasers into your eyes. After all, we have been told to never, not even once, point even the equivalent of a low-powered laser pointer at a person, let alone straight at their eyes. Some may remember the 2014 incident at the Burning Man festival where festival-goers practically destroyed the sight of a staff member with handheld lasers.
The answer to these concerns is that very low-powered lasers are used: enough to draw the images, but not enough to cause more than the usual wear and tear from using one’s eyes to perceive the world around us. As the light is projected straight onto the retina, there is no image that can become washed out in bright sunlight. Companies like Bosch have built prototypes of VRD glasses, with Bosch recently showing off their BML500P Bosch Smartglasses Light Drive solution, which claims an optical output power of under 15 µW.
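To put that number in perspective, we can compare it against the roughly 0.39 mW figure commonly cited as the Class 1 accessible emission limit for continuous visible lasers. This is a rule-of-thumb comparison, not the full IEC 60825-1 calculation (which depends on wavelength and exposure duration):

```python
# Sanity check: Bosch's claimed optical output (under 15 µW) against
# the commonly cited ~0.39 mW Class 1 limit for continuous visible
# lasers. A rough comparison, not a full IEC 60825-1 classification.
CLASS_1_LIMIT_W = 0.39e-3   # ~0.39 mW, approximate Class 1 CW visible limit
BML500P_OUTPUT_W = 15e-6    # <15 µW claimed optical output

margin = CLASS_1_LIMIT_W / BML500P_OUTPUT_W
print(f"claimed output is roughly {margin:.0f}x below the Class 1 limit")
```

Even at the claimed maximum, the output sits well over an order of magnitude below the power level considered safe under all reasonably foreseeable conditions of use.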
Bosch’s solution uses RGB lasers with a MEMS mirror to direct the light into the subject’s pupil, and onto the retina. However, one big disadvantage of such a VRD solution is that it cannot simply be picked up and used the way the previously mentioned smart glasses can. As discussed earlier, VRDs need to precisely target the fovea, meaning that a VRD has to be adjusted to each individual user in order to work; otherwise one will simply see nothing, as the laser misses its target.
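The reason a tiny MEMS mirror can paint a usable image at all comes down to angle doubling: tilting a mirror by θ swings the reflected beam by 2θ, so a small mechanical deflection covers a comparatively wide optical field. The tilt values below are illustrative, not the BML500P’s actual specifications:

```python
# Angle doubling in a scanning mirror: a mirror tilted by theta
# deflects the reflected beam by 2*theta, so a mirror sweeping
# +/- theta mechanically scans a full optical field of 4*theta.
def scan_field_deg(mechanical_tilt_deg: float) -> float:
    """Full optical scan angle (degrees) from a +/- mechanical tilt."""
    return 4 * mechanical_tilt_deg

for tilt in (1.0, 2.5, 5.0):
    print(f"+/-{tilt} deg mechanical tilt -> {scan_field_deg(tilt)} deg optical field")
```

A mirror that only rocks through ±2.5° mechanically thus already sweeps a 10° optical field, comfortably covering the roughly 5° the fovea subtends.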
Much like the Google Glass solution, Bosch’s BML500P is mostly useful for HUD purposes, but over time this solution could be scaled up, with a higher resolution than the BML500P’s 150 line pairs and in a stereo version.
The Future is Bright
The cost of entry into the AR and smart glasses market at this point is still very steep. While Google Glass Enterprise 2 will set you back a measly $999 or so, HoloLens 2 costs $3,500 (and up), leading some to improvise their own solution using beam splitters dug out of a bargain bin at a local optics shop. Here too, the warning about potentially damaging one’s eyes cannot be overstated: sending the full brightness of a small (pico)projector essentially straight into one’s eye can cause permanent damage and blindness.
There are also AR approaches that focus on specific applications, such as tabletop gaming with Tilt Five’s solution. Taken together, it appears that AR — whether using the beam splitter, projection or VRD approach — still is in a nascent phase. Much like virtual reality (VR) a few years ago, it will take more research and development to come up with something that checks all the boxes for being affordable, robust and reliable.
That said, there definitely is a lot of potential here and I, for one, am looking forward to seeing what comes out of this over the coming years.