Author Laura Schatzkin is a web developer at the Electronic Frontier Foundation with over twenty years of tech experience, mostly working with non-profits. In 2018, she was selected to participate in the Mozilla XRStudio Residency program, where she coded EFF’s first virtual reality project, Spot the Surveillance. She has also worked on many of EFF’s digital activism projects, including Surveillance Self-Defense, Security Education Companion, StartTLS Everywhere, and Fix It Already. She has a parallel life as a visual artist.
Everyone sees that our world is changing rapidly, and not necessarily for the better. Unlike the promotional films of the 1960s and ’70s, which depicted the future as a clean, utopian world of helpful gadgetry, every new device and app seems to come at a cost these days: incessant multitasking, loss of privacy and, consequently, loss of personal freedom.
Many of us are aware of this phenomenon; we often throw up our hands and take the stance of “that’s just the way it is now” or “I have nothing to hide anyway,” because we feel disempowered to do anything about it.
It was in response to these perceptions that I recently approached creating a virtual reality experience for the organization I work for, the Electronic Frontier Foundation (EFF). Since 1990, EFF has worked on issues of free expression, privacy, and innovation. Just as email was the digital frontier almost 30 years ago, virtual reality (VR) and related technologies are the frontier now.
Some of us are old enough to remember the VR arcade games of the 1990s. They were fun, but unwieldy and expensive to play. I played plenty of Dactyl Nightmare at $5 for three minutes, shooting pterodactyls from the sky. The headset was enormous, and there was a banister provided so one would not topple over from the weight.
Having had my first taste of contemporary VR in the fall of 2017, I was immediately hooked. My coworker Dave Maass had set up a VR station at EFF so that we could all experience it. I put on the headset and loaded the Google Earth VR app. I went to the top of the Eiffel Tower, walked through the Taj Mahal, and spent a long time floating above the Earth looking at the Milky Way. I didn’t want to take the headset off. Ever.
But I did, and went back to my desk and my work as a web developer. It wasn’t long before I started to wonder, how was this technology created? Could I create it? Are there possible problems with such a technology? The answers to the latter two questions were yes, and the answer to the first question was yet to be discovered.
One of the areas we focus on at EFF is what we call “street-level surveillance.” This includes the technologies that law enforcement uses, from automated license plate readers (ALPRs) to tattoo recognition. What many people don’t realize is that, while ostensibly collecting data to fight crime, these devices also gather data on everyone, store it, and share it far and wide. Anyone, not just criminals, can be pulled into the dragnet.
Along with like-minded coworkers who work to shed light on these technologies, Dave, Soraya Okuda, and I began to think of ways a VR tool might help teach the public about this issue.
That was one part of the project. The other part was discovering what could be invasive about VR, what personal information could be collected from someone in a VR experience. With this two-fold approach, we went through the process of creating an educational tool while educating ourselves.
We called the project Spot the Surveillance. My job was to create an easy-to-use experience within a non-profit budget.
We wanted to make the experience easily accessible in two senses of the word. One, that anyone could access it from an array of devices. Two, that people with all physical abilities should be able to participate in the experience.
With these parameters in mind, we chose to create a web browser-based experience using Mozilla’s A-Frame. A-Frame is a framework built on top of HTML and three.js that lets developers create interactive 3D environments using familiar web markup. It enabled us to design the entire experience on a web page with a minimum of code and overhead. I had been coding HTML for ages, but I had no concept of how to code VR. I was working within my field, yet stretching outside it.
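To give a sense of how little code A-Frame demands, a minimal scene of the kind described above can be sketched entirely in markup. The image path, hotspot position, and version number here are illustrative placeholders, not the project’s actual code:

```html
<!-- Minimal A-Frame sketch (all asset paths and values are illustrative).
     <a-sky> wraps the viewer in a 360° photo; the cursor's "fuse" mode
     fires a click event after the user's gaze rests on an entity. -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-sky src="street-corner-360.jpg"></a-sky>
      <!-- An invisible hotspot placed over a device in the photo -->
      <a-sphere id="camera-hotspot" position="2 1.6 -3" radius="0.3"
                material="opacity: 0"></a-sphere>
      <a-camera>
        <a-cursor fuse="true" fuse-timeout="1500"></a-cursor>
      </a-camera>
    </a-scene>
  </body>
</html>
```

Because the scene is plain markup served from a web page, it loads in an ordinary browser, which is what makes the low-overhead, any-device approach possible.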
Spot the Surveillance places the user on a street corner in San Francisco, using a 360-degree photo. We had spent some time scoping out possible locations and decided on this particular corner, as it had a police station and multiple surveillance devices. We had a stroke of luck while taking the photo when a couple of police officers came out with a citizen.
In the scene, we see this citizen having this encounter with the police. Less obviously, there are various surveillance devices for the user to find. The user turns their head to focus on items in the scene. If they correctly focus on one of the devices, an information card appears and is read aloud, explaining what the device is and how it is used. A tally is kept of how many devices the user finds, and when all are found there’s an invitation to read more about the issue on EFF’s website.
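The spot-and-tally mechanic described above can be sketched as a small piece of state-keeping, independent of any VR framework. This is a hypothetical illustration, not the project’s actual code; the class and identifier names are invented:

```javascript
// Hypothetical sketch of the spotting tally. In the real experience,
// spot() would be called when the headset's gaze cursor settles on a
// device hotspot (e.g. an A-Frame cursor's fused "click" event).
class SpotTracker {
  constructor(deviceIds) {
    this.remaining = new Set(deviceIds); // devices not yet found
    this.found = new Set();              // devices already found
  }
  // Returns true only the first time a real device is spotted.
  spot(id) {
    if (!this.remaining.has(id)) return false; // decoy, or already found
    this.remaining.delete(id);
    this.found.add(id);
    return true;
  }
  get complete() { return this.remaining.size === 0; }
}

// Illustrative device names only.
const tracker = new SpotTracker(['alpr', 'cctv', 'bodycam']);
tracker.spot('alpr');   // true: show the info card, bump the tally
tracker.spot('alpr');   // false: already counted
tracker.spot('cctv');
tracker.spot('bodycam');
console.log(tracker.complete); // true → time for "Congratulations!"
```

Keeping the tally in plain in-memory state like this also matches the project’s privacy goal: nothing about the user’s movements needs to be sent anywhere or stored.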
We incorporated user-centered design, meaning we tested the scene with real people as we developed it rather than relying on our own assumptions. We first had coworkers try out the experience to check our ideas about the ease of the interactions. Then we sought out testers outside our offices to confirm that people could engage with and control the experience as intended, whatever their abilities.
One thing we learned in creating the experience was that, because of the immersive nature of VR, we perceive what we are experiencing as real even if we know it is not. Because of this, users can often become overwhelmed if there is an intimidating element to the scene. With this information in mind, we sought to make the experience as neutral as possible. We used a pleasant sound, paired with a visual cue, to indicate that the user had spotted the device. We included two false devices that have mildly humorous scripts when they are revealed. When all devices are found, there is a cheerful “Congratulations!”
We used a 360-degree photograph that we took ourselves, which means the experience uses real-life imagery rather than an artist-constructed simulation. This added to the “realness” of the scene for the user. Despite the light-hearted elements mentioned above, some people are still intimidated to see police standing so close to them, and some people are disturbed by the information they learn, but for the most part when they remove the headset they are relaxed and engaged.
Another thing we learned was that there is a whole sector of VR using biometric feedback. By incorporating machine learning, creators of VR can use pupil dilation tracking and other forms of biosensing to interpret the emotional responses users have to what they are viewing. This data is deeply personal by its nature, and there is not yet public information about how, or whether, it is collected and stored.
This is the part of the frontier that we at EFF wanted to explore and keep on our radar as a potential problem worth monitoring. It’s also a part we definitely did not want to participate in. A challenge was to create a functional experience that responds to physical movements and choices but does not record them.
Since we launched Spot the Surveillance last year, we have taken it to a dozen conferences, and we estimate around 600 people have tried it in person. These conferences have been across the US, Mexico, and Brazil. Because the experience can be viewed in a browser, people can also try it from their own computer or smartphone, or their own VR headset if they have one. Our statistics show that over 4500 people have accessed it, many internationally. Some people have told us that it has made them look for surveillance in their daily lives. For some people, it is their very first encounter with VR. From the feedback we’ve received, either in person or through an online survey, we feel that the educational value is high.
The code for the project is open to everyone. My dream is that other mission-driven organizations will use our code as a base for their own educational projects, since many see VR as an educational opportunity rather than solely the purview of games and entertainment. A search on the terms “VR education” yields a multitude of articles on the subject. The organization Public VR focuses on supporting non-profits that wish to explore this technology.
Whereas VR puts a person in a simulated reality, augmented reality (AR) overlays objects and interactions on the real world. This can be as simple as displaying images on top of a smartphone’s camera view. A smartphone’s GPS can be used to make the experience location-specific. The best-known example is the Pokémon Go game, which was so popular a few years back.
Since smartphone use is so universal, AR already surpasses VR in its distribution, and it’s likely that AR will become widespread in a short time. Scrolling through the Wikipedia page on AR shows that more than a dozen fields, including architecture, are already beginning to implement it. Teaching opportunities are possible in both VR and AR, so it’s up to the creators of each experience to decide which technology best suits their teaching concepts. I’m hoping that mission-driven organizations will utilize these opportunities as soon as resources allow.
Just as no one could have predicted a future where people would rather text than talk, or that rideshare services would emerge so quickly to displace taxis, we really don’t know how these new technologies will develop. It does seem likely that the age of websites as the primary conveyor of digital information is coming to an end, and that experience- and location-specific information will supersede it. We can extrapolate, based on the current rate of technological advancement, that these new technologies will develop quickly. This is our current frontier, exciting and frightening, educational and distracting, and still very new.
Thank you to Soraya Okuda, Michelle Chang, Lisa Berman, Jason Kelley, Christopher Robert Van Wiemeersch, Casey Yee, Mozilla and the A-Frame community for their work and help on this project.