
The Novelty of Virtual Reality Has Worn Off. So What Does Its Future Look Like?
Like self-sustaining fusion power and fully self-driving cars, virtual reality (VR) has, over the last decade, joined the ranks of technologies not quite living up to their promise. While VR headsets, games, and even workplace applications have grown in recent years, widespread adoption at anything like the rate of smartphones is still a long way off.
Yuhang Zhao is an assistant professor of human-computer interaction at the University of Wisconsin-Madison whose research focuses on accessibility in augmented reality (AR) and VR technology. She tells Popular Mechanics that part of this delay in adoption can be chalked up to pricing—with even cheap models costing hundreds of dollars—and to hardware designed around male users.
“[The headsets are] heavy, bulky, and not comfortable to wear,” Zhao says. “The form factor design did not consider the needs of diverse users, such as female users… people with disabilities… [or even] people who wear glasses.”
But there may also be another reason behind VR’s lukewarm reception: VR hardware simply isn’t that good yet. At least, that’s the perspective of Won-Jae Joo and Mark Brongersma, VR experts from the Samsung Advanced Institute of Technology and Stanford University, who recently published an essay in Science exploring the ways in which the technical specs of VR headsets were letting the technology down.
“A larger-scale adoption of the technology by the general public will require the headsets to be smaller, lighter, and cheaper and to have more data-processing power,” Joo and Brongersma write in their essay. “The competing demand for their displays to be smaller and to have higher resolution is particularly challenging because of the inherent trade-off between the two, [but] recent advances on the display technology for VR may help to achieve that goal.”
One important aspect that will need to improve in VR headsets is the pixel density of the displays themselves. This is tricky not only because there are physical limits to how small you can build light-emitting elements like LCD pixels and OLEDs, but also because the viewer's eyes sit far closer to a headset's screen than to a phone or television.
“A person with 20/20 vision can distinguish about 60 pixels per degree near the center of their field of vision,” Joo and Brongersma write.
“To put this in context, for a 75-inch ultrahigh-definition TV with 8000 pixels across, the resolution as seen by a viewer from 10 feet away is greater than 200 pixels per degree. However, because of the small distance between the display and the user’s eyes for a VR headset today, the best resolution for that experience is only about 15 pixels per degree.”
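The essay's TV figure can be sanity-checked with basic trigonometry: pixels per degree is simply the number of pixels across the display divided by the horizontal field of view the screen subtends at the viewer's eye. A rough sketch in Python, assuming a 16:9 panel and the standard 7,680-pixel width of an 8K display (the essay rounds this to 8,000):

```python
import math

def pixels_per_degree(pixels_across: int, screen_width_in: float,
                      distance_in: float) -> float:
    """Angular resolution: pixels across the display divided by the
    horizontal field of view (in degrees) it spans at the given distance."""
    fov_deg = 2 * math.degrees(math.atan(screen_width_in / (2 * distance_in)))
    return pixels_across / fov_deg

# A 75-inch diagonal 16:9 TV is about 65.4 inches wide.
tv_width = 75 * 16 / math.hypot(16, 9)

# Viewed from 10 feet (120 inches): roughly 250 pixels per degree,
# consistent with the essay's "greater than 200."
print(round(pixels_per_degree(7680, tv_width, 120)))
```

The same function shows why headsets struggle: halving the viewing distance widens the field of view and cuts the pixels available per degree, and a headset display sits only an inch or two from the optics.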
To achieve a resolution on par with human vision, the authors write, VR headsets and glasses will need displays packing roughly 7,100 to 10,000 pixels per inch. For comparison, the iPhone 13's display has a density of under 500 pixels per inch.
Currently, there is no single clear path to achieving this level of density. However, meta-OLEDs, a new kind of engineered organic light-emitting diode that can efficiently harness different frequencies of light, may be one potential solution, the authors write.
Another avenue for improving the user experience without necessarily chasing peak pixel density is to build more human behavior into the hardware, Zhao says; namely, sensors that track where a user's eyes are looking.
“With eye trackers, the system can focus on rendering elements that the user is looking at, refining people’s VR experience and reducing the computational power used to render the whole scene,” Zhao says.
In other words, VR systems of the near-future may focus their high-resolution pixels on parts of an image a user is actually looking at (e.g., a character they’re talking to) while allowing the background landscape to blur. This is similar to how our own vision works and could help reduce the virtual motion sickness (also known as cybersickness) that some users experience when using VR, Joo and Brongersma write.
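This gaze-contingent approach, often called foveated rendering, can be sketched as a simple policy: render at full resolution only within a few degrees of the tracked gaze point, and drop the shading rate further out. The function name and tier thresholds below are hypothetical, a toy illustration rather than any actual headset's pipeline:

```python
def render_quality(pixel_angle_deg: float, gaze_angle_deg: float,
                   fovea_deg: float = 5.0) -> str:
    """Toy foveated-rendering policy: full resolution near the tracked
    gaze point, progressively coarser shading toward the periphery.
    Thresholds are illustrative, not taken from any real engine."""
    offset = abs(pixel_angle_deg - gaze_angle_deg)
    if offset <= fovea_deg:
        return "full"      # foveal region: native resolution
    elif offset <= 3 * fovea_deg:
        return "half"      # parafoveal: reduced shading rate
    return "quarter"       # periphery: heavily downsampled

gaze = 0.0  # user looking straight ahead
print([render_quality(angle, gaze) for angle in (2, 10, 40)])
```

The payoff is that the GPU only pays full price for the small patch of the scene the eye can actually resolve, which is exactly the computational saving Zhao describes.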
“I think this is a trend and eventually most AR/VR devices will involve eye trackers,” Zhao says, adding that eye trackers could also be used for user authentication. Such customization may also help address cybersickness disparities observed between male and female users, which have been linked to headsets designed for only the average male interpupillary distance.
However, these upgrades are unlikely to come with a smaller price tag, Zhao says—at least, not at first. As with all technology, the question of pricing may come down to larger manufacturing improvements as well as increased interest by the general public.
One way to potentially improve future adoption, Zhao says, is to put more consideration into making virtual environments accessible to users with visual or motor function impairments.
“To make VR a broadly used platform, it is very important to make it usable by all kinds of users,” she says. “For example, computer and smartphone systems all support screen readers for blind users, but none of the VR systems is compatible with screen readers.”
Using auditory cues to better integrate visually impaired users into a virtual environment is part of the work that Zhao and her students are doing to address these inequities. Games like The Last of Us Part II have also begun to build accessibility features into their designs, she says.
Where this technology will be in another decade is uncertain, but VR’s biggest advocates are unlikely to give up on it anytime soon.
Sarah is a science and technology journalist based in Boston interested in how innovation and research intersect with our daily lives. She has written for a number of national publications and covers innovation news at Inverse.
Source: https://www.popularmechanics.com/technology/a43327959/the-future-of-virtual-reality/