The Microsoft HoloLens’ limited field of view (FOV) is, unsurprisingly, its weakest point. Anyone even remotely familiar with high-end AR headsets knows how disappointingly restrictive its estimated 35-degree FOV is to one’s immersion.
But this issue isn’t exclusive to the HoloLens. Other high-end developer-edition headsets, such as the ODG R-9, and even the recently unveiled Magic Leap One, currently suffer from similarly limited FOVs.
Why do these headsets have such a limited field of view? Why is 30 to 50 degrees the default field of view range for these headsets? We can answer that by discussing the principles behind AR field of view itself.
A (Tiny) Window to Your World
The field of view is technically defined as the extent of the observable world at any given moment. With a 100-degree horizontal FOV, for instance, the observable world spans 50 degrees to the left and 50 degrees to the right of the center of vision.
The visual field of the human eye is approximately 200 degrees horizontally and 135 degrees vertically, with the FOVs of both eyes overlapping to give our brain an immediate sense of depth. This binocular field of vision lets us see a huge portion of our surroundings and easily detect the tiniest motion within them. The challenge, therefore, is to take this dynamic, ever-updating natural field of vision and translate it into bits and data.
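To put those numbers side by side, here is a back-of-envelope sketch comparing a HoloLens-class window against the human visual field. It treats each field as a simple horizontal-by-vertical rectangle of degrees, which is a rough approximation (real visual fields are not rectangular), and the 35-by-20-degree HoloLens figure is an estimate, not an official spec.

```python
# Rough comparison of a headset's angular window vs. the human visual
# field, approximating each as horizontal degrees x vertical degrees.

def angular_area(h_deg, v_deg):
    """Approximate 'area' of a field of view, in square degrees."""
    return h_deg * v_deg

human = angular_area(200, 135)    # ~27,000 deg^2 for the binocular field
hololens = angular_area(35, 20)   # ~700 deg^2 (estimated headset window)

coverage = hololens / human
print(f"Estimated HoloLens coverage: {coverage:.1%} of the human field")
```

Even with generous rounding, the headset's window covers only a few percent of what our eyes take in at once, which is why the "tiny window" complaint comes up so consistently.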
For current and future AR headsets to work out their own field of view, they need to fulfill these three basic criteria:
- Imaging technology optimized for direct human vision (not just viewing at a distance)
- The processing power to compute holographic information live
- The hardware to pack everything into a relatively portable, wearable scale
Unfortunately, this is where current AR technology still falls far short. We may have advanced to the point of nearly achieving all three, but we are nowhere close to the imaging capabilities of our eyes, let alone to the brain’s powerful coordination in processing what it sees.
Remember, the holographic information doesn’t just need to be processed. The device also needs to track its surroundings to keep the image up to date. Beyond that, it must still work out what its user is doing and where they are, amid the flood of data, in order to render the correct image.
The (Broken) Link to Your World
The solution? Think in reverse. Instead of forcing the field of view to match human ocular complexity, we simply shrink down the field of view itself.
With a smaller field of view, AR headsets will have less visual data to process at a given time. This decreases the amount of information needed to render, track, update, and calculate displayed images.
This, in turn, frees up processing headroom for even more information, and/or room to pack in more hardware for better precision and higher detail. The resulting immersive experience from that small window should, in effect, be a faithful representation of how the virtual object interacts with the real world.
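The savings from a narrower window can be sketched with simple arithmetic: at a fixed angular resolution (pixels per degree), the pixel count per frame, and hence the rendering work, grows with the product of the horizontal and vertical FOV. The 60-pixels-per-degree figure (roughly "retina" sharpness) and the specific vertical FOVs below are illustrative assumptions, not measured device specs.

```python
# Why shrinking the FOV reduces rendering load: pixel count scales
# with horizontal FOV x vertical FOV at a fixed angular resolution.

def pixels_required(h_fov_deg, v_fov_deg, ppd=60):
    """Pixels per eye per frame at ppd pixels-per-degree sharpness."""
    return (h_fov_deg * ppd) * (v_fov_deg * ppd)

narrow = pixels_required(35, 20)   # HoloLens-class window (assumed)
wide = pixels_required(90, 50)     # Meta 2-class window (assumed)

print(f"35-degree window: {narrow / 1e6:.1f} MP per eye per frame")
print(f"90-degree window: {wide / 1e6:.1f} MP per eye per frame")
print(f"Ratio: {wide / narrow:.1f}x more pixels to drive")
```

Under these assumptions the wide window demands over six times the pixels of the narrow one, every frame, before tracking and hologram computation are even counted.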
The best example of this is the ODG R-9, which limits its field of view to 50 degrees diagonally in order to render images that can interact with walls and other similar obstructions.
Of course, trading field of view for processing power also limits the maximum size of the objects that can be rendered. With a shrunken FOV, your window into augmented reality shows the scene in parts rather than as a whole picture, so you would likely see only a portion of any object that grows much larger than the FOV itself.
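Whether an object fits in the window comes down to its angular size, which is 2·atan(width / (2·distance)). The sketch below checks a couple of illustrative objects against an estimated 35-degree horizontal FOV; the object sizes and the FOV figure are assumptions for the example, not device measurements.

```python
import math

# Does a virtual object fit inside the headset's window?
# Its angular size is 2 * atan(width / (2 * distance)).

def angular_size_deg(width_m, distance_m):
    """Angle subtended by an object of a given width at a given distance."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

FOV = 35  # degrees; estimated HoloLens-class horizontal FOV

for width, dist in [(0.5, 2.0), (2.0, 2.0)]:
    angle = angular_size_deg(width, dist)
    verdict = "fits" if angle <= FOV else "is clipped"
    print(f"{width} m object at {dist} m subtends {angle:.1f} deg -> {verdict}")
```

A half-meter hologram two meters away sits comfortably inside the window, while a two-meter one at the same distance subtends over 50 degrees and gets cut off at the edges, exactly the partial-view effect described above.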
Early VR and AR headsets of the late 20th century, including the famed Sword of Damocles, had FOVs smaller than 30 degrees. This is an obvious consequence of how primitive those devices were: they ran very simple computer programs and often did little more than drive a stereoscopic display.
Currently, the headset with the widest field of view is the Meta 2, at a substantial 90 degrees, far wider than any of its competitors. Unfortunately, this also makes it the most cumbersome of the developer headsets, as it has to work harder to render its images using the same level of processing hardware found in competing current-generation AR headsets.
The (Gradual) Expansion of Your World
Of course, plans to break past the 50-degree limit on a wide scale are already in motion. While companies understand that the FOV of current AR headsets is limited, the devices still enter the market as early-adoption versions of their product lines. In the meantime, research and development continues on future models with wider FOVs, keeping the technology evolving while maintaining economic viability for the companies involved.
The simplest solution is to wait for processors to become faster and more efficient, or for new optics that render images more efficiently. But a few companies are not willing to wait: they are already planning rather unique strategies to boost the field of view of their flagship products’ next iterations.
Here are two very promising examples:
- Microsoft revealed a new design patent for a HoloLens 2 device last year. It features a new binocular projection system, in which the image is split across two 35-degree displays to achieve a combined FOV of around 70 degrees.
- Shanghai-based company Realmax stormed CES 2018 this year with the unveiling of its 100-degree FOV prototype AR headset. The company claims it uses “proprietary optics”, some form of waveguide technology that directs light onto the viewing lens in a specific manner, to achieve this astounding FOV.
As for other similar endeavors, we can expect things to develop even further once cost-efficient commercial versions of AR headsets like the HoloLens or Magic Leap One finally arrive sometime in the near future.
Next stop: The Human Eye
So, will the field of view of AR headsets ever approach the complexity of the human eye without being tethered to a high-end PC? Technically, yes. If there’s one thing the Magic Leap One has proved, it’s that the main processing powerhouse doesn’t have to be worn with the glasses. It can be offloaded to a separate unit that crunches the necessary data.
Then again, there is the option of modifying our eyes to become the optical hardware itself. In that case, we probably won’t even need futuristic optics to finally get the default AR field of view we have always craved.