Stanford Journalism Program’s Guide to Using Virtual Reality for Storytelling 


Given the explosion of interest in virtual reality among media organizations, we sought in January to establish best practices and ideal scenarios for using the technology in storytelling through our inaugural immersive journalism class at Stanford University.

During the 10-week course, 12 undergraduate and graduate students evaluated a range of virtual reality experiences published by the New York Times, Wall Street Journal, ABC News and others. We compared commercially available virtual reality headsets (Google Cardboard, HTC Vive, Samsung Gear VR and Oculus Rift) for ease and quality, as well as virtual reality cameras — the (more expensive but expansive) GoPro and the (more affordable) Ricoh Theta S.

While students learned about 3D modeling and computer-generated avatars and studied the groundbreaking work of Nonny de la Peña in this area, their own hands-on projects were with 360-degree video — the preferred medium of the moment for news organizations. Computer-generated avatars have the advantage of allowing the viewer to walk around in a recreated scene, but that experience is more expensive, time-consuming and technically difficult to build.

While we were interested in students familiarizing themselves with — and experimenting with — the technology, we primarily wanted to answer the fundamental question that journalists considering the medium ought to be asking themselves: Why would you use virtual reality in your storytelling in the first place? The answer: not as often as you might think. Below are some of our key conclusions on this point.

Immersive journalism is a quickly developing field still in its early stages. Some of the preliminary suggestions and observations in this memo involve the storytelling side. We found that trying to apply traditional video-journalism techniques to virtual reality is problematic both ethically and narratively; the medium may require a brand-new set of guidelines.

We also offer technical tips for reporters who are new to spherical-video cameras and stitching scenes, and we raise ethical questions that we don’t have answers to yet but that we believe journalism practitioners and thinkers need to consider deeply as we integrate this new medium into our journalism toolkit.

Immersive journalism’s potential

VR has the potential to enhance storytelling by offering the audience experiences and environments that are logistically out of reach for most of us: life in a refugee camp, exploring a distant planet, scrubbing up to enter an Ebola ward, walking with bison on the Great Plains, teetering atop the spire on the World Trade Center. The Knight Foundation and Columbia’s Tow Center for Digital Journalism recently issued in-depth reports on virtual reality journalism.

Even with these stunning examples, the reality is that the vast majority of news stories are not suited for VR. For at least the foreseeable future, we believe most VR pieces will complement other forms of reporting rather than replace them. But the potential to impact audiences in ways not possible with other journalistic media is enticing.

When to use virtual reality in journalism

Virtual reality — and for the purposes of this essay we mean spherical video — has a very limited list of “home-run” applications in journalism, as our key collaborator in this experimental class, Jeremy Bailenson, founder of Stanford’s Virtual Human Interaction Lab, likes to say. It should be used only to tell stories suited to its key strength: putting the viewer in the scene.

Journalists, we argue, should only consider using VR in their storytelling for:

  1. Places that are hard to get to or where people are unlikely to go.
  2. Where being in the actual space deepens one’s understanding of a story beyond a written narrative, photos or regular video.
  3. And most crucially, following on the previous point: where turning your head side-to-side is essential. If all the action is front and center — say at a political debate — you don’t need spherical video.

Virtual reality and narrative — rules for the road, questions and concerns

  • Generally, pieces that have been published thus far have been too long. Viewers get fatigued in the headset and bored. They should last no longer than four or five minutes. Shorter, punchier experiences are better.
  • Virtual reality does not always work as a stand-alone journalism project. Better to think of it as a complement that adds value to other forms of reporting.
  • Full narratives that try to replicate traditional video documentaries leave viewers wondering where they should look; as a result, viewers can easily miss the most compelling action in the scene.
  • The advantage of shooting current events in virtual reality is that viewers can discover elements on each viewing that they missed before. One potential future application is capturing historical events so that we can analyze them more thoroughly down the road.

Adapting techniques

Few of the rules of journalism or filmmaking apply to VR storytelling.

For instance, the venerated “Rule of Six” of three-time Oscar-winning film editor Walter Murch has to be stretched to apply to the free-range audience of spherical video. As Murch put it: “Your job is partly to anticipate, partly to control the thought processes of the audience.”

At the same time, Murch’s focus on sound rings more true because sound has a greater potential role in the VR medium. Filmmakers such as Google VR’s Jessica Brillhart are starting to codify editing practices for cinematic VR.

Still, many questions remain. How do you balance the journalist’s need to create a narrative with the viewer’s freedom to explore the VR space? Framing is no longer the most relevant craft decision in this medium; placement of the camera rig is. Early VR developers believed the camera could not be moved without distressing the viewer, but the popularity of GoPro’s 360 channel and app is calling that into question.

Cardboard viewers allowed students to use their smartphones to view VR journalism pieces.

Technological hurdles and barriers

360-degree viewing with or without a headset

While there is massive investment in VR technology, the use of the medium in journalism has been limited so far, with a few exceptions, to larger media organizations that have built or purchased their own platforms.

Facebook 360 and YouTube 360 have excellent mobile experiences, allowing immersive viewing without the use of “cardboard.” These platforms garner millions of views and a much larger audience than for platforms requiring any type of headset. And inexpensive 360 cameras and streaming devices are making it much cheaper and easier to create VR content.

Camera challenges

And yet there are significant barriers to producing all but the lowest-quality spherical video. The Ricoh Theta S is an elegant, plug-and-play solution, but its resolution falls short of the expectations set by the HD video experience. Lower-quality content is gaining an audience on Facebook and YouTube, but it is a leap from there to generating profit as the New York Times has done with its VR app.

Cameras and software inevitably will get better, cheaper and faster. But journalists need a few more advancements to make the process of spherical capture more bearable. With current technology, it is enormously frustrating to place cameras in the right spot at the right moment without getting caught in the frame. And before you even reach that stage, there is a complicated set-up and workflow:

  • Every camera, media card and file must be numbered in sequence to facilitate troubleshooting.
  • Each of the multiple cameras you are using must have identical settings, and the chance of accidentally changing a setting increases with handling.
  • Battery life is a limitation (you need external battery packs, special cables and spare batteries to extend the shoot).
  • The remote control for multiple GoPros is unreliable and Wi-Fi kills battery life (see point above).
  • For each clip you shoot, it’s best to perform both audio and motion synchronization at the start and end to give more options for finding a good stitch point (see below for stitch issues).

After you successfully get your multi-cam rig running, you usually have to leave the very scene you are hoping to document so as not to become a confusing character in it. This leaves the journalist blindfolded, with no idea of what is being captured until the footage is stitched and reviewed in post-production. High-end cinematic VR filmmakers can buy live monitoring of the 360 shot and motorized camera movements, but at a steep price that puts these tools beyond the reach of most journalists. A low-cost live-monitoring system is needed, along with a robotic camera stand that can move the array into position without the videographer appearing in the shot each time. Journalists also face security concerns, since they must leave the camera array unattended while staying out of its line of sight.

The computing power required to stitch means the shots can only be reviewed after hours of processing, usually at the end of the day. And if you missed it, as Joni Mitchell sings in “Big Yellow Taxi”: “You don’t know what you’ve got till it’s gone.”

Stitching — the post-production pain

The post-production stitching step is incredibly time-consuming, laborious and rife with computer crashes. The slowest step in the spherical-video workflow is stitching together the files from the multi-camera array, typically 6 to 16 GoPros.

There are software solutions (Kolor and VideoStitch) that automate the process, but a few problems remain to be solved:

  • Stitching software is slow and crashes often — using fewer cameras will speed up the stitch.
  • Fewer cameras mean wider lenses must be employed, which create more distortion, especially in objects close to the cameras.
  • Vertical alignment of cameras creates shorter stitch seams, which are easier to correct.
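The lens trade-off in the list above can be made concrete with back-of-the-envelope arithmetic. The sketch below is our own illustration, with an assumed 15 percent seam overlap (not a figure from any camera or software vendor): fewer cameras spread around a circle each have to cover a wider slice of the 360 degrees, which is why wide, distortion-prone lenses become necessary.

```python
def min_horizontal_fov(num_cameras, overlap_fraction=0.15):
    """Rough minimum horizontal field of view (degrees) each lens needs so
    that num_cameras evenly spaced around a circle cover a full 360 degrees,
    with each adjacent pair sharing overlap_fraction of one camera's view
    for the stitch seam."""
    base = 360.0 / num_cameras  # per-camera coverage with zero overlap
    return base * (1.0 + overlap_fraction)
```

With 15 percent overlap, six cameras need lenses roughly 69 degrees wide, while three cameras need about 138 degrees, pushing into fisheye territory where close-range distortion grows.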

As the cost of technology decreases and expertise increases, we will see the quality of content take a leap forward. Only time and investment will tell how quickly journalism advances in this new medium. The leaders in the space are doing phenomenal, pioneering work now.

Master’s student Naomi Cornman (Stanford Media Studies ’16) worked with classmate Anna Yelizarova on a series of 360-video stories about sports that published on Peninsula Press, the Stanford Journalism Program’s local news website.

Ethical dilemmas

A key issue that came up during our course was transparency, or lack thereof, in some of the pieces. VR journalists, we believe, need to figure out a way to be clear and honest about narration, voiceovers and other editorial judgments that were made to enhance storytelling but which may manipulate the viewer. To do this, we need to define what virtual reality journalism is. Is it a filmmaker’s personal vision? Or is it traditional news with its traditional ethics standards of balance and fairness?

Beyond deepening one’s understanding of an issue (climate change or solitary confinement, for example), virtual reality is a powerful empathy-generating tool, as research by Stanford’s Virtual Human Interaction Lab has found. Using VR in storytelling could therefore generate far more audience empathy, through immersion in the story, than a photo or regular video can.

How do we as journalists reconcile this with objectivity and other traditional standards? Will it make viewers feel like there is an agenda? Is that OK?

In traditional video journalism, the presence of a camera has always influenced the reality in front of it. The result can be authentic or staged and the same goes for VR. The difference now is that the reality surrounds the cameras and the journalist has two choices: leave that reality or become a part of the story, active or passive.

Should the video journalist be in the shot? Should the journalist appear and narrate? Since the feeling of presence is so important for the viewer exploring the story, having a reporter in the space might intrude on the feeling of independent discovery. Heavy narration can likewise distract from the presence and immersion the reporter is trying to create.

Which leads us to the question: If journalists build VR, will the audience come? Will VR stick, or will it go the way of 3D TV? The two share some of the same problems — expensive technology and goofy, uncomfortable, wrap-around glasses. But there is too much investment and development going on in the space; it may already be too big to fail. How is VR different from 3D TV? There was never a clear content-driven reason to invest in 3D TV: no “killer” experience emerged, so content producers never reached critical mass in the market. With the exciting content produced so far, quality VR content already has a foundation to build on. Once you experience a VR “ah-ha” moment, you can’t wait to find the next one.

Geri Migielicz is the Lorry I. Lokey Visiting Professor in Professional Journalism, teaching multimedia in the Stanford Journalism Program in the Department of Communication. She was Director of Photography at the San Jose Mercury News from 1993 to 2009. Geri was executive producer of a 2007 national News and Documentary Emmy Award-winning web documentary, Uprooted, for mercurynews.com. She was on the leadership team for the coverage of the Loma Prieta earthquake that won a 1990 Pulitzer Prize in general news reporting. She also edited the paper’s coverage of California’s recall election, a 2003 Pulitzer finalist in Feature Photography. Geri is co-founder and executive editor of Story4, a multimedia production studio whose current project is a feature documentary, The Cannon and The Flower.

Janine Zacharia is the Carlos Kelly McClatchy Lecturer in Stanford’s Department of Communication, where she teaches reporting and writing classes on public issues and foreign affairs as part of the Stanford Journalism Program. Before coming to Stanford, she reported on the Middle East and U.S. foreign policy for close to two decades as Jerusalem Bureau Chief for The Washington Post, chief diplomatic correspondent for Bloomberg News, Washington bureau chief for the Jerusalem Post and as a reporter for Reuters. She appears regularly on cable news and radio programs as a Middle East analyst and writes regular commentary for the San Francisco Chronicle, Slate and other publications.

Located in the heart of Silicon Valley, the Stanford Journalism Program is focused on multimedia storytelling, immersive journalism and data-driven reporting. It is also the home of the Stanford Computational Journalism Lab. Follow on Twitter: @StanfordJourn.

This post originally appeared on Medium.
