
Opportunities and Pitfalls for Multi-User AR Experiences

Building robust multi-user AR will require considering potentially undesirable user interactions.


Big ideas in computing often become even bigger when they become interconnected and turn social. Servers became more powerful when linked together to form the Internet; smartphones became more enthralling when equipped with social networking apps; online gaming became more high-stakes when made into massively multiplayer experiences; the nature of work changed dramatically with the advent of direct messaging platforms, real-time cloud-based document editing, and videoconferencing. It is no surprise, then, that many compelling use cases of AR are multi-user.

Consider, for instance, collaborative AR tools that allow enterprise users to manipulate 3D structures, multiplayer AR games that center around real-world interaction, or AR retail experiences that enable shopping with friends. The appeal of such use cases has begun to attract attention from the AR community. For instance, Microsoft’s Azure Spatial Anchors are designed to persist and share objects in AR, and the startup Ubiquity6 is built around a vision of multi-user AR content.


Besides imagining the benefits of shared AR experiences, though, it is also important to consider the harms — intentional or unintentional — that users may do to each other in such interactions. Only by including security and privacy in our design process can we build AR technologies that enable positive multi-user interactions while minimizing harmful ones.

What Could Go Wrong?

Negative multi-user interactions are not unique to AR. For instance, online communities such as Twitter have already needed to combat hateful language and manipulative content, and the VR community is grappling with player-to-player harassment in immersive games.

Multi-user AR will likely see a rise in abusive user behavior as well. What makes AR's risks unique is the way virtual content and the real world interact: this combination amplifies the impact of existing forms of misbehavior and also introduces new concerns.

Imagine, for instance, that you are navigating a shopping center using the AR signage that shops have placed to promote their businesses, but someone has vandalized some nearby displays by placing interfering AR content in the same space. Or suppose you are on a team designing a new mechanical component, and a competitor sneaks a look at your model during a client meeting in a coffee shop. Or suppose you and a group of friends are constructing some playful AR content in a park when another user disassembles or deletes your construction.

More generally, these risks fall into three categories. A malicious or careless user might:

  •  create or share unwanted or harmful content,
  •  see private content they were not intended to access, or
  •  perform unwanted manipulations on other users' content.
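Viewed through an access-control lens, these three categories roughly correspond to create/share, read, and write operations on shared AR objects. As a purely illustrative sketch (the names and structure below are assumptions, not any existing AR platform's API), a per-object permission record might look like this:

```typescript
// Hypothetical permission model for a shared AR object (illustrative only;
// not the API of any shipping AR platform or toolkit).

type UserId = string;

interface ArObjectPermissions {
  owner: UserId;
  // Who may see the object at all (the "see private content" risk).
  viewers: Set<UserId>;
  // Who may move, edit, or delete it (the "unwanted manipulation" risk).
  editors: Set<UserId>;
  // Whether others may attach new content to it or share copies of it
  // (the "unwanted or harmful content" risk).
  allowAttachments: boolean;
}

function canEdit(perms: ArObjectPermissions, user: UserId): boolean {
  return user === perms.owner || perms.editors.has(user);
}

function canView(perms: ArObjectPermissions, user: UserId): boolean {
  return canEdit(perms, user) || perms.viewers.has(user);
}
```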

How, then, do we address these challenges?

Solving the Problem: Next Steps

The first step in protecting AR users from each other is to consider how the above types of unwanted behavior might manifest in different app contexts: the semantics of an application can determine whether a given interaction is welcome or harmful.

For instance, in an app that lets users build structures out of virtual components that interact with the real world according to the laws of physics, a user should not be able to attach an unwanted or offensive accessory to a passerby; such an app may thus need to enforce a notion of personal space. An app in which users can play a virtual game of paintball, however, requires that virtual paint splatters stick to other players, so enforcing personal space makes less sense in that context.
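To make this concrete, here is a minimal sketch (with entirely hypothetical names; real AR frameworks differ) of how an app-level policy might decide whether content can be anchored near another person, with a paintball-style app simply opting out of the personal-space rule:

```typescript
// Minimal sketch of an app-configurable "personal space" policy.
// All names here are hypothetical; the point is that the same placement
// request is allowed or rejected depending on the app's semantics.

interface Vec3 { x: number; y: number; z: number; }

interface PlacementPolicy {
  // Radius (in meters) around another person within which content
  // placement is blocked; 0 disables the rule (e.g., virtual paintball).
  personalSpaceRadius: number;
}

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

function canPlaceContent(
  policy: PlacementPolicy,
  placementPos: Vec3,
  nearbyPeople: Vec3[],
): boolean {
  // Reject placement that lands inside anyone's personal space.
  return nearbyPeople.every(
    (p) => distance(placementPos, p) >= policy.personalSpaceRadius,
  );
}

// A building app might use { personalSpaceRadius: 1.0 },
// while a paintball-style game could use { personalSpaceRadius: 0 }.
```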

Crucially, mitigating multi-user risks in AR apps need not come at the expense of rich functionality in shared experiences. For example, in an app where content is shared publicly by default, a sensitive or personal object can still be given a degree of privacy by appearing to unauthorized viewers as a neutral placeholder rather than in its original form.
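One way to picture this placeholder idea (again as an illustrative sketch under assumed names, not any toolkit's actual API) is a rendering step that substitutes a generic stand-in for objects the viewing user is not authorized to see:

```typescript
// Illustrative sketch of "placeholder" rendering for private objects.
// Names are hypothetical; the idea is that unauthorized viewers still see
// that something occupies the space, just not what it is.

interface ArObject {
  id: string;
  ownerId: string;
  isPrivate: boolean;
  mesh: string;            // the real 3D model to render
  placeholderMesh: string; // a neutral stand-in of roughly the same size
}

function meshForViewer(obj: ArObject, viewerId: string): string {
  const authorized = !obj.isPrivate || viewerId === obj.ownerId;
  // Authorized users see the real content; everyone else sees a
  // same-sized neutral placeholder, preserving the shared-space layout.
  return authorized ? obj.mesh : obj.placeholderMesh;
}
```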


The ideas in this article have been explored by the academic research community — for example, through the ShareAR, EMMIE, and SecSpace projects. However, commercial AR development platforms have not yet built such controls into their APIs, so the responsibility for mediating multi-user AR interactions largely remains with app developers.

To fulfill the promise of multi-user AR, app designers must pay attention to both good and bad potential user behavior, and platform and toolkit designers must provide more robust support to developers in addressing these issues in their designs.


About the Guest Author(s)

Kimberly Ruth
Research Assistant, University of Washington Security and Privacy Research Lab

Kimberly Ruth has been researching AR security and privacy for the past four years as part of the Security and Privacy Research Lab at the University of Washington. She and the lab’s co-directors, Professors Tadayoshi Kohno and Franziska Roesner, developed the ShareAR toolkit (arsharingtoolkit.com) to help developers build robust multi-user AR interactions. More on the lab’s work in AR security can be found at ar-sec.cs.washington.edu.