Last week, Amazon announced it was integrating AI into a number of products—including smart glasses, smart home systems, and its voice assistant, Alexa—that help users navigate the world. This week, Meta will unveil its latest AI and extended reality (XR) features, and next week Google will reveal its next line of Pixel phones equipped with Google AI. If you thought AI was already “revolutionary,” just wait until it’s part of the increasingly immersive, responsive, personal devices that power our lives.

AI is already hastening technology’s trend toward greater immersion, blurring the boundaries between the physical and digital worlds and allowing users to easily create their own content. When combined with technologies like augmented or virtual reality, it will open up a world of creative possibilities, but also raise new issues related to privacy, manipulation, and safety. In immersive spaces, our bodies often forget that the content we’re interacting with is virtual, not physical. This is great for treating pain and training employees. However, it also means that VR harassment and assault can feel real, and that disinformation and manipulation campaigns are more effective.

Generative AI could worsen manipulation in immersive environments, creating endless streams of interactive media personalized to be as persuasive, or deceptive, as possible. To prevent this, regulators must avoid the mistakes they’ve made in the past and act now to set appropriate rules of the road for the development and use of generative AI in immersive environments. Without adequate privacy protections, integrating AI into these environments could amplify the threats these emerging technologies already pose.

Take misinformation. With all the intimate data generated in immersive environments, actors motivated to manipulate people could hypercharge their use of AI to create influence campaigns tailored to each individual. One study by pioneering VR researcher Jeremy Bailenson shows that by subtly editing photos of political candidates’ faces to appear more like a given voter, it’s possible to make that person more likely to vote for the candidate. The threat of manipulation is exacerbated in immersive environments, which often collect body-based data such as head and hand motion. That information can reveal sensitive details like a user’s demographics, habits, and health, which can in turn be used to build detailed profiles of a user’s interests, personality, and characteristics. Imagine a chatbot in VR that analyzes data about your online habits and the content your eyes linger on to determine the most convincing way to sell you on a product, politician, or idea, all in real time.

AI-driven manipulation in immersive environments will empower nefarious actors to conduct influence campaigns at scale, personalized to each user. We’re already familiar with deepfakes that spread disinformation and fuel harassment, and microtargeting that drives users toward addictive behaviors and radicalization. The additional element of immersion makes it even easier to manipulate people.

To mitigate the risks associated with AI in immersive technologies and provide individuals with a safe environment to adopt them, clear and meaningful privacy and ethical safeguards are necessary. Policymakers should pass strong privacy laws that safeguard users’ data, prevent unanticipated uses of this data, and give users more control over what is collected and why. In the meantime, with no comprehensive federal privacy law in place, regulatory agencies like the US Federal Trade Commission (FTC) should use their consumer protection authority to guide companies on what kinds of practices are “unfair and deceptive” in immersive spaces, particularly when AI is involved. Until more formal regulations are introduced, companies should collaborate with experts to develop best practices for handling user data, govern advertising on their platforms, and design AI-generated immersive experiences to minimize the threat of manipulation.

As we wait for policymakers to catch up, it is critical for people to become educated on how these technologies work, the data they collect, how that data is used, and what harm they may cause individuals and society. AI-enabled immersive technologies are increasingly becoming part of our everyday lives, and are changing how we interact with others and the world around us. People need to be empowered to make these tools work best for them—and not the other way around.


WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here. Submit an op-ed at ideas@wired.com.
