When you’re on social media and see someone engaging in dangerous, illegal, or simply distasteful activity, it is usually fairly easy to report the content or the poster. In the physical world, these situations are rarer and scarier, but here too there are ways to report illicit interactions when necessary. But what about immersive experiences?
We tend to spend most of our time talking about the positive interactions and promise of immersive experiences. While AR and VR applications combine the best of in-person and online events, in the hands of people who would misuse them they also combine the worst of both settings. A recent report by the Information Technology & Innovation Foundation (ITIF) explores this tension.
Authors and Influences
“Content Moderation in Multi-User Immersive Experiences: AR/VR and the Future of Online Speech” is a free online publication by Daniel Castro, ITIF vice president and director of the Center for Data Innovation. Late last year, ITIF also partnered with the XR Association to host a remote conference on AR/VR policy.
The report’s endnotes also credit Ellysse Dick for assisting with it. Dick is an ITIF policy analyst who spoke at the policy conference last year.
“Even a few years ago, these seemed like very niche topics but the landscape is changing at an amazing pace,” Dick said at the conference. “The window of opportunity for impactful and effective policy is closing.”
The Problems With MUIEs
The main premise of the report was that providers and moderators of Multi-User Immersive Experiences (MUIEs) can learn from moderation in legacy forms of social media and interaction. However, immersive experiences have complexities not covered by existing approaches. Further, moderation must prevent illicit interactions without limiting productive ones.
“MUIEs offer opportunities for community building, information sharing, and meaningful political speech and advocacy… Over-moderation in response to potential abuses could stifle these activities,” wrote Castro. “However, MUIEs require specific considerations for content moderation and online speech because they combine [social] elements in ways that other multi-user communications platforms do not.”
The report also pointed out a fact oft-repeated in conversations surrounding policy and moderation in immersive experiences: 2D social media hasn’t figured all of these things out yet either. Just as important, any policies that MUIEs do adopt should be flexible enough to serve a global user base.
“Policymakers are already trying to catch up to rapid changes in communications technologies and two-dimensional media,” wrote Castro. “If left unaddressed, these gaps will only widen as new, immersive mediums are more widely adopted.”
What We Can and Can’t Learn From 2D Experiences
The report identified elements of MUIEs that differentiate them from other online social experiences, including a greater variety of content types, content accessible at different levels of immersion, and differing intended and unintended uses of platforms. It also made sure to point out that immersive environments themselves can pose their own challenges.
“Many of the considerations, challenges, and components from 2D platforms are also necessary components of MUIE content policies,” Castro wrote. “However, immersive experiences contain unique elements that present specific challenges for developing and enforcing content policies.”
The immersive nature of these experiences can make them feel more real to users who have a sense of embodiment in a virtual world, but people joining through less immersive technologies may perceive the same situations differently. The non-verbal actions of avatars in immersive spaces, as well as their interactions with virtual objects, can also pose moderation challenges.
“User interactions with one another and with digital elements in their environment are often synchronous and ephemeral, meaning much of this immersive content is generated in real time and is not easily retained for future review,” wrote Castro. “Further, objects themselves may be acceptable content, but the ways in which users interact with them may break platform rules.”
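To make the ephemerality problem concrete, here is a minimal sketch in TypeScript of one way a platform might handle it: keep a short rolling buffer of interaction events that normally expires, but let a user report preserve a snapshot for human review. This is not drawn from the report; every name and default here is hypothetical.

```typescript
// Hypothetical sketch: a rolling buffer of interaction events that is
// normally discarded, but can be snapshotted when a user files a report.
// All names and the retention window are illustrative, not from the report.

interface InteractionEvent {
  timestamp: number;   // ms since epoch
  actorId: string;     // avatar performing the action
  action: string;      // e.g. "gesture", "object-grab", "voice-chunk"
  targetId?: string;   // avatar or object acted upon, if any
}

class EphemeralEventBuffer {
  private events: InteractionEvent[] = [];

  constructor(private retentionMs: number = 60_000) {}

  // Record an event and drop anything older than the retention window,
  // so unreported interactions stay ephemeral.
  record(event: InteractionEvent): void {
    this.events.push(event);
    const cutoff = Date.now() - this.retentionMs;
    this.events = this.events.filter(e => e.timestamp >= cutoff);
  }

  // When a user reports abuse, copy the recent window for moderators
  // instead of letting it expire with the rest of the session.
  snapshotForReport(): InteractionEvent[] {
    return [...this.events];
  }
}

// Usage: record events as they happen; snapshot when a report is filed.
const buffer = new EphemeralEventBuffer(30_000);
buffer.record({ timestamp: Date.now(), actorId: "avatar-42", action: "gesture", targetId: "avatar-7" });
const evidence = buffer.snapshotForReport();
console.log(`Preserved ${evidence.length} events for review`);
```

A design along these lines tries to respect both sides of the trade-off Castro describes: interactions remain ephemeral by default, but the ones a user flags are not lost before a moderator can see them.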
Complications can also arise when platforms pivot to serve new uses, since standards put in place for one use case may need to change. For example, as the report was being written, Spatial pivoted from enterprise collaboration to art. While content moderation hasn’t been a major problem since, the platform now hosts media that wouldn’t have appeared there a year ago.
Immersive Experiences in Augmented Reality
Some of this conversation may be familiar to readers who follow XR ethics and policy. The report also devoted time to an underserved consideration: augmented reality.
MUIEs have existed in the VR and Web3D spaces for a while now, but until recently there weren’t many shared AR experiences. As spatial computing develops, this is changing rapidly.
“In AR, content moderation approaches have to consider not only the digital content that users generate, but also how that content augments the real-world locations, objects, or even people,” wrote Castro.
The report also raised concerns over the ownership of “digital layers on top of physical spaces.” As more platforms arise to sell this kind of “digital real estate,” owners of physical real estate have been scrambling to protect their physical environments and assets, with organizations like ARIA Network and Darabase offering some solutions.
So far, incidents of bad actors in AR seem to be few and far between. However, experts in the policy space agree that it is best to address issues with immersive experiences sooner rather than later.
The report also touched on using a real person’s likeness to make an avatar without their knowledge or consent. This is another sticky problem carried over from the 2D web: community reporting can sometimes address it, but such content can still do damage as it makes the rounds, particularly if a platform is slow to remove it.
This issue is also an example of a situation in which it may be difficult to distinguish wrongdoing from “artistic representation.” There are examples, not linked in this article, of people creating 3D models of Facebook founder and Meta CEO Mark Zuckerberg. One recent viral example featured an avatar of the controversial entrepreneur brandishing a knife over audio from Meta Connect.
Lessons and Recommendations
The report wrapped up with notes on how moderation is already being done effectively and how it might be employed by those who offer and occupy immersive experiences. Recommendations included adopting guidelines similar to those that would govern an in-person event and making participants aware of those expectations upon entering the experience.
While the report explored ideas like allowing law enforcement into immersive experiences, it focused on building experiences on platforms with well-outlined terms of use and having individual experiences moderated by their hosts rather than relying on platform staff. Above all, it emphasized giving power to individual users.
“One way to strike the right balance between safety and privacy is to implement user controls that allow individuals to shape experiences to meet their needs and expectations,” Castro wrote.
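As a rough illustration of what such user controls might look like in practice, here is a hypothetical TypeScript sketch of per-user safety settings applied client-side, so each individual shapes their own experience. The names, defaults, and logic are my own assumptions, not the report’s.

```typescript
// Hypothetical per-user safety settings, illustrating the kind of
// individual controls the report recommends. All names and defaults
// are illustrative assumptions.

interface SafetySettings {
  personalSpaceRadius: number;   // meters other avatars must keep away
  mutedUsers: Set<string>;       // voice suppressed for this client only
  blockedUsers: Set<string>;     // avatar and content hidden entirely
  allowStrangerVoice: boolean;   // hear users outside the friends list?
}

const defaults: SafetySettings = {
  personalSpaceRadius: 1.0,
  mutedUsers: new Set(),
  blockedUsers: new Set(),
  allowStrangerVoice: true,
};

// Decide client-side whether another user's audio should be played,
// keeping the choice with the individual rather than platform staff.
function shouldPlayVoice(
  settings: SafetySettings,
  speakerId: string,
  isFriend: boolean
): boolean {
  if (settings.blockedUsers.has(speakerId)) return false;
  if (settings.mutedUsers.has(speakerId)) return false;
  if (!isFriend && !settings.allowStrangerVoice) return false;
  return true;
}

// Usage: a blocked user's voice is never rendered for this client.
const mySettings = { ...defaults, blockedUsers: new Set(["avatar-99"]) };
console.log(shouldPlayVoice(mySettings, "avatar-99", false)); // false
```

Because checks like these run on the individual user’s own client, they also speak to the privacy side of the balance: the platform does not need to record everyone’s behavior in order to enforce one person’s preferences.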
The report’s final recommendations were that immersive experience providers should focus on preventing real-world harm and that any framework should be “iterative with clear processes to update it” as new concerns arise.
Toward Greater Safety in XR and Beyond
This report is a welcome addition to a growing body of expertise on safety in immersive experiences. While many elements of these experiences are new, and we should avoid repeating the shortcomings of past online platforms, there is much we can learn from moderation approaches in other kinds of virtual communities.