Privacy Concerns and Data Harms in the Metaverse

By Marcus Carter and Ben Egliston

Facebook chief executive Mark Zuckerberg has recently announced the company will change its name to Meta, saying the move reflects the fact that the company is now much broader than just the social media platform (which will still be called Facebook). The rebrand follows several months of intensifying talk from Zuckerberg and the company more broadly about the metaverse – the idea of integrating real and digital worlds ever more seamlessly, using technologies such as virtual reality (VR) and augmented reality (AR).

Zuckerberg sees VR as a pathway to a new kind of “social computing platform” using the enhanced feeling of “presence” that VR affords. For Facebook, the introduction of VR-based computing will be like the leap from text-based command line interfaces to the graphical user interfaces we use today. This may well be right. VR affords a strong feeling of embodied presence that offers new possibilities for entertainment, training, learning and connecting with others at a distance.

But if the metaverse that Facebook is building functions via the company’s existing social computing platform and business model of extracting data to deliver targeted advertisements, the entire future of the internet is at stake. 

Facebook’s journey into the metaverse

The Meta rebrand is the culmination of seven years of corporate acquisitions, investments and research that kicked off with Facebook’s acquisition of VR headset company Oculus for US$2 billion in 2014. Oculus had risen to prominence with a lucrative Kickstarter campaign, and many of its backers were angry that their support for the “future of gaming” had been co-opted by Silicon Valley.

While gamers fretted that Facebook would give them VR versions of Farmville rather than the hardcore content they envisioned, cynics viewed the purchase as part of a spending spree after Facebook’s IPO, or simply Zuckerberg indulging a personal interest in gaming. Oculus has gone on to dominate the VR market with over 60% market share. That’s thanks to heavy cross-subsidisation from Facebook’s advertising business and a console-like approach with the mobile “Quest” VR headset.

Beyond Oculus, Facebook has invested heavily in VR and AR. Organised under the umbrella of Facebook Reality Labs, there are nearly 10,000 people working on these technologies – almost 20% of Facebook’s workforce. Facebook also recently announced plans to hire another 10,000 developers in the European Union to work on its metaverse computing platform.

While much of its work remains behind closed doors, Facebook Reality Labs’ publicised projects include Project Aria, which seeks to create live 3D maps of public spaces, and the recently released Ray-Ban Stories – Facebook-integrated sunglasses with 5-megapixel cameras and voice control.

All these investments and projects are steps towards the infrastructure for Zuckerberg’s vision of the metaverse. As he said earlier in the year: “I think it really makes sense for us to invest deeply to help shape what I think is going to be the next major computing platform.”

Exactly what this “next computing platform” will look like remains to be seen. The name ‘metaverse’ is a reference to the game-like virtual worlds of science fiction, where people live out their lives in tightly controlled virtual environments. But the technologies and investments Facebook is making point to something much more tightly integrated with the real, physical world too.

Why does Facebook want to rule the metaverse?

The metaverse may eventually come to define how we work, learn and socialise. This means VR and AR would move beyond their current niche uses and become everyday technologies on which we will all depend, much like the smartphone. We can guess at Facebook’s vision for the metaverse by looking to its existing approach to social media. It has moulded our online lives into a gigantic revenue stream based on power, control and surveillance, fuelled by our data.

VR and AR headsets collect enormous amounts of data about the user and their environment. This is one of the key ethical issues around these emerging technologies, and presumably one of the chief attractions for Facebook in owning and developing them.

As American VR researcher Jeremy Bailenson has written:

…commercial VR systems typically track body movements 90 times per second to display the scene appropriately, and high-end systems record 18 types of movements across the head and hands. Consequently, spending 20 minutes in a VR simulation leaves just under 2 million unique recordings of body language.
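For a rough sense of where that “just under 2 million” figure comes from, here is a minimal back-of-the-envelope sketch in Python (the 90 samples per second and 18 movement types are taken from Bailenson’s description above; the 20-minute session is his example):

# Back-of-the-envelope arithmetic behind Bailenson's figure; purely illustrative.
SAMPLES_PER_SECOND = 90   # typical VR tracking rate
TRACKED_MOVEMENTS = 18    # movement types across head and hands in high-end systems
SESSION_MINUTES = 20
recordings = SAMPLES_PER_SECOND * TRACKED_MOVEMENTS * SESSION_MINUTES * 60
print(f"{recordings:,} recordings")   # 1,944,000 – "just under 2 million"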

What makes this particularly concerning is that the way you move your body is so unique that VR data can be used to identify you, rather like a fingerprint. That means everything you do in VR could potentially be traced back to your individual identity. For Facebook – a digital advertising empire built on tracking our data – it’s a tantalising prospect.
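To illustrate the underlying idea – and only the idea; this is a hypothetical sketch with invented data, not a description of any real system – here is a toy example of how simple summary statistics of head movement could be matched back to a known user, rather like a fingerprint:

# Toy illustration of behavioural re-identification from motion data.
# All data, names and features here are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def motion_features(trace):
    # trace: (n_samples, 3) array of head positions; summarise as mean and spread per axis
    return np.concatenate([trace.mean(axis=0), trace.std(axis=0)])

# "Enrolled" users, each with a characteristic (simulated) movement profile
profiles = {name: rng.normal(loc=i, scale=0.1 + 0.05 * i, size=(1800, 3))
            for i, name in enumerate(["alice", "bob", "carol"])}
enrolled = {name: motion_features(trace) for name, trace in profiles.items()}

# A new, nominally anonymous session that in fact belongs to "bob"
anonymous_session = rng.normal(loc=1, scale=0.15, size=(1800, 3))
query = motion_features(anonymous_session)

# Nearest-neighbour match on the movement statistics re-identifies the user
best_match = min(enrolled, key=lambda name: np.linalg.norm(enrolled[name] - query))
print(best_match)   # -> "bob"

Published studies of VR motion data use far richer features and machine-learning classifiers, but the basic point is the same: movement patterns are distinctive enough to act as an identifier.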

Facebook’s Oculus Quest headsets also use outward-facing cameras to track and map their surroundings. In late 2019 Facebook said they “don’t collect and store images or 3D maps of your environment on our servers today”. Note the word today, which tech journalist Ben Lang notes makes clear the company is not ruling out anything in the future.

Responsible Innovation?

Alongside Project Aria, Facebook launched its Responsible Innovation Principles, and recently pledged US$50 million to “build the metaverse responsibly”. But, as Catherine D’Ignazio and Lauren Klein note in their book Data Feminism, responsible innovation is often focused on individualised concepts of harm, rather than addressing the structural power imbalances baked into technologies such as social media.

In our studies of Facebook’s “Oculus Imaginary” (Facebook’s vision for how it will use Oculus technology) and of the changes over time to Oculus’ privacy and data policies, we suggest Facebook publicly frames privacy in VR as a question of individual privacy (over which users can have control), rather than as a question of surveillance and data harvesting (over which they have none). Framing questions about VR and AR surveillance in terms of individual privacy suits companies like Facebook very well, because their real failings lie in the (un)ethical use of data (as in the case of Cambridge Analytica) and in their asymmetric platform power.

Critics have derided Facebook’s announcements as “privacy theatre” and corporate spin. Digital rights advocacy group Access Now, which participated in a Facebook AR privacy “design jam” in 2020 and urged Facebook to prioritise alerting bystanders they were being recorded by Ray-Ban Stories, says its recommendation was ignored.

Ray-Ban Stories features a small light on the side of the frame, which is illuminated when recording. But it can easily be covered over, and while this would violate Facebook’s terms of service, it’s hard to see how Facebook would realistically stop anyone doing it. As Daniel Leufer of Access Now writes, “There are many better ways they could have made it clear recording is underway than a tiny white light. Why not a red light, which is typically associated with recording? Why not add a loud beep before recording starts? Or give them a unique design to distinguish them from normal Ray-Bans?”

In releasing its smart glasses in partnership with Ray-Ban – as a pair of glasses that are, from more than a few metres away at least, indistinguishable from a normal pair of Ray-Bans – Facebook is exploiting an existing, familiar technology (sunglasses) to normalise wearable surveillance technology, about which people currently have deep and understandable reservations. If video-recording Ray-Bans become mainstream, who knows what other data-intensive gadgets are lurking just around the corner?

The metaverse doesn’t have to be dystopian

Appropriately enough, the metaverse under Facebook is likely to resemble the term’s literary origins, coined in Neal Stephenson’s 1992 novel Snow Crash to describe an exploitative, corporatised, hierarchical virtual space.

But it doesn’t have to be this way. Tony Parisi, one of the early pioneers of VR, argues we already have a blueprint for a non-dystopian metaverse. He says we should look back to the original, pre-corporatised vision of the internet, which embodied “an open, collaborative and consensus-driven way to develop technologies and tools”.

Facebook’s rebrand, its dominance in the VR market, its seeming desire to hire every VR and AR developer in Europe, and its dozens of corporate acquisitions – all this sounds less like true collaboration and consensus, and more like an attempt to control the next frontier of computing. A 2018 internal document, recently revealed as part of the ‘Facebook Papers’, plainly lays out this ambition for control.

Many emerging technologies encounter what is known as the Collingridge dilemma: it is hard to predict a technology’s impacts until it is extensively developed and widely used, but by then it is almost impossible to control or change.

We see this playing out right now in efforts to regulate Google and Facebook’s power over news media. As David Watts argues, big tech designs its own rules of ethics to avoid scrutiny and accountability: “Feelgood, high-level data ethics principles are not fit for the purpose of regulating big tech … The harms linked to big tech can only be addressed by proper regulation.”

What might regulation of Facebook’s VR look like? Firstly, we immediately need stronger baseline protections for any data captured through devices like VR and AR headsets and glasses, recognising that this data is behavioural, biometric and potentially re-identifiable. This would include much stronger measures for obtaining user consent, in contrast to the vague licence agreement Oculus currently employs.

Secondly, and perhaps more radically, we call for a moratorium on the processing of headset data beyond what is required for the headset to operate. As we have seen with Facebook’s other technologies, and with numerous AI-related cases of algorithmic discrimination and harm, reining in technologies once they are widespread can be an insurmountable challenge. Our research has already identified potentially discriminatory uses of VR data analytics in workplaces, and there is significant potential for harm in the use of VR data for targeted digital advertising.

However, regulation isn’t necessarily a silver bullet. Facebook itself, for instance, has recently been pre-emptively pushing for regulation of the metaverse. As Andrew Bosworth, Facebook’s now-CTO, and Nick Clegg, Facebook’s VP of global affairs and communications (and a former UK deputy prime minister), note, sufficient regulatory mechanisms will be needed to ensure privacy and safety in the metaverse. The aim here is to get ahead of the inevitable wave of criticism if and when the metaverse materialises, to set the agenda, and to look good while doing it.

We let Facebook rule the world of social media. We shouldn’t let it rule the metaverse.

About the Authors

Dr Marcus Carter is a Senior Lecturer in Digital Cultures at The University of Sydney and director of the Sydney Games and Play Lab. With a background in Game Studies and Human-Computer Interaction research, his work is concerned with the social experience and impacts of games and emerging mixed reality technologies.

Dr Ben Egliston is a postdoctoral research fellow in the Digital Media Research Centre at Queensland University of Technology. He researches the practices and politics of digital technology, currently focusing on videogames and mixed reality.

The views expressed in this article are those of the authors and do not necessarily reflect the views or policies of The World Financial Review.