On April 7, 2018, a chemical bomb was dropped onto the rooftop of a residential building in the Syrian city of Douma, near Damascus, killing dozens of people, many of them children. While the Syrian regime flatly denied that it had launched the attack, the United States and its allies claimed that a chlorine bomb had been dropped by the Syrian government.
In an attempt to get to the truth of the attack, The New York Times last month published an interactive investigative story that reconstructs the scene of the chemical attack and walks readers through the key pieces of evidence – all in augmented reality.
Storybench spoke with Graham Roberts, the director of immersive platforms storytelling at The New York Times, to understand how new technologies like AR can be used for powerful news stories.
How did this project get started?
We have a Visual Investigations team, and they've been doing a number of these projects that look at multiple videos from multiple perspectives and reconstruct scenes. They use visual media as a basis for understanding and discovering what occurred.
We had taken a look at the video they were working on and it dawned on us that this is a spatial three-dimensional story. I mean, all the evidence that we’re looking at here is based on where things were arranged and found in a particular state. In this case, it was the balcony where the [chemical] weapon was found. It was just one of those things where we thought, wow, this would just be perfect to explore using augmented reality or virtual reality or whatever you may like to call it.
Seeing that there are three-dimensional elements to some of these videos, we started looking for newsy topics that we could approach with this new augmented reality technology and started conversations with that team. We became aware that they were looking at this story on the chemical weapons attack in Syria and were working directly with Forensic Architecture, a research group at the University of London, as a way of understanding the story.
You can examine the balcony and the evidence it contained for yourself using our AR experience in the NYT app or on desktop here. Spatial analysis by @forensicarchi and @nytgraphics https://t.co/ZbbA1v88N9 pic.twitter.com/dpV7E0lGaO
— Malachy Browne (@malachybrowne) June 27, 2018
Tell us a bit about the user experience.
One of the things we're doing with augmented reality is breaking out of this very limited, two-inch mobile screen. Of course, we can explain these points to you on that screen, and we do. But when we can treat the phone as a window instead of a surface, and combine that with this technology, we can put you at the scene, which allows us to give you a much more intuitive, spatial understanding of what's going on. And then, on top of that, we add interactivity and create a mode in which you are almost the investigator. We can guide you through the scenes, and you can see first-hand all the evidence that we are able to present in the story.
How do you determine if a particular story lends itself to AR, VR or interactive virtual reality?
It's a good question. The determination is based on a number of factors, but one of them, especially for augmented reality, is spatial understanding. So, in the Douma scene, we give it to you at real scale. You understand the space intuitively just by moving around the scene, and you can grasp exactly what the scale is. That is difficult to do without this capability.
We did a Guatemala piece a few days before this, where we were able to capture a scene of some of the destruction from the volcano using over 700 photos. We can bring that entire scene to you and, if you have the space, you can display the whole scene at real scale in front of you and walk around it. That's an incredible thing that was simply impossible before. So, understanding the scale of things relative to your space is really important. When we see that something can be depicted that way, that's a good indicator that it could be a good use case for AR.
A volcanic eruption wiped out much of this Guatemalan village, leaving a current of gases, sand, ash, rocks and tree trunks. See the scale of the destruction in augmented reality: https://t.co/J6yS4ycnvz pic.twitter.com/9q6Hp3XMRs
— New York Times Video (@nytvideo) June 20, 2018
You could say almost anything three-dimensional has the potential to benefit from this. I mean, we are three-dimensional beings in a three-dimensional world. My argument would be that the best possible way to understand a three-dimensional object or story is to put it in front of you and let you interact with it as if you were there, as opposed to looking at something on your phone, which is a more abstracted representation of the object.
We often hear that the future of news media is AR/VR journalism. What are your thoughts? Do you think we're moving in that direction?
The problem with this question is that when people picture VR journalism, I think they're picturing a big, black box strapped to someone's face, and that that's how we're all going to interact with things. That's just the current state of the technology, which is very early and very much an experiment in the way things may be. I think the real question is: how are we going to interact with media in the near future?
I'm not a fortune-teller, but a lot of where the technology points is that we are at the very beginning of the next phase of computing – spatial computing – and VR and AR and all these kinds of technology are part of that. What we are seeing now are early experiments in which we start to understand what it means to have a more physical interaction with [media]. But I think what we're really talking about is the continuous merging of our physical and digital lives.
Right now, we are in a physical world and we have this glass rectangle that we carry around, that we are inseparable from. It's so necessary, but it's also awkward. If you think about it, everyone walking down the street, staring at that little rectangle, not even looking where they're going – it's not a human way of interacting with information. So, spatial computing is the idea that this technology now exists to put information directly into the world in front of us. We don't have the form factors yet – the hardware is definitely not there – so what we are doing on the phone is leaning into that future to understand: what if we didn't have to interact on this little square screen, and there were a different way that digital information could be understood? My optimistic side says this will [help us] have a more human kind of interaction with the information we consume.
Do you believe that using AR or VR is suitable for certain beats more so than for others? Say in war reporting or environmental stories?
It's hard to answer singularly because it depends on whether we are creating something or capturing something. That's a very important distinction.
If we start with capturing, then we're talking about the video side of this, and there's a long way to go as far as quality is concerned. But what I can say is that it's an important journalistic tool. If you set aside the terminology, what we're doing is capturing the entire scene from a point of view, and that's a capability that seems incredibly important to journalism because it increases transparency.
If you're just shooting in one direction, you're making very important editing decisions at the exact moment when you're including or excluding things from the frame, and you're doing it on the fly. But if you can capture the entire scene, then there can be no question of, well, was this misrepresented because the camera was pointed this way and didn't show us that? You're showing all of it. So, I think that is important in any kind of coverage.
As far as creating, you're talking about the world of explanatory graphics. When a thing is best understood spatially, it will always be advantageous to demonstrate it through a VR kind of technology. There is also, of course, the question of engagement. Getting people to care is an issue, and if you can create incredibly engaging experiences that bring people in and make them interact, I think that can be very important.
What are some of the challenges you encounter when using new technology, especially for immersive storytelling?
I think the biggest challenge is probably the audience. The point is to reach a large audience, but the most innovative work we can do is not always where the audiences are – it's usually the opposite. So, there can be some friction. We try to experiment across platforms – maybe not where we have big audiences, but where at least some of our audience might be – so we can learn a bit and try out the things that are really interesting to us.
That's actually been one of the most amazing things about this year. We started at the beginning of 2017 experimenting with what we called modern AR, which is non-target-based AR – AR that is free of having to point your phone at something. We used Google Tango as a way to learn how to tell stories in a spatial way using augmented reality, and we could apply what we learned in that experimentation to what we do now, leveraging ARKit and ARCore, and reach a substantial audience. So now we can take some of what we learned and literally publish it to our readers, and that's what we've been doing since February. It's incredible how quickly that happened.
That's the thing people often forget: this is not linear. These kinds of changes are exponential, and that capability arrived on devices people have owned for a few years now.
How has your audience response been to some of the AR stories that you have published?
Well, I think it's been really strong. People are really seeing The New York Times as the leader in this, adding seriousness to it and applying the sort of rigor we apply to everything else. We're making the argument that this technology doesn't have to be used for dancing hot dogs or something silly – it is such an incredible technology, and we can layer something of great value on top of it.
I feel it's still early, so it's hard to have a real sense of this. And there are still limitations on who can see this work: they must have the New York Times app, a supported device, and the right version of the OS, and grant camera access – so it's a smaller subset of our readers, for sure, who are getting the fully designed AR experience. But from what I see on social media, it seems very positive, and I think people are recognizing it. Some of my favorite comments are along the lines of: this is why The New York Times is so important, because they put their resources into this kind of work in journalism.
The @nytimes continues to provide powerful, room-scale, AR experiences to tell stories with their latest piece in which the technology was used to uncover war crimes committed by the Syrian government near the capital of Damascus.https://t.co/vviNfITpSO pic.twitter.com/BUDQTSxSN4
— WITHIN VR/AR (@WITHIN) June 26, 2018
Moving forward, what will be the strategy for The Times?
What we've done is try to move quickly and make this work so we can publish things quickly. But now, I think it's time to step back. Even in the last five months, we've seen the technology evolve in ways we want to take advantage of. We built a sort of MVP [minimum viable product] version of it, but now we want to bring in those new technologies. We want to look at how we designed our approach, where we can get rid of some friction, and how we can redesign it to make a real version 1.0. I'd say we now have the beta version, so in the next half of the year we'll be revisiting all the decisions we've made and trying to improve on them.
Any last thoughts on this topic that you’d want to share with our readers?
I think people need to understand that this is not the arrival point. This is the very beginning of all of these things. These all touch on the way we may interact with information in the future – in a profoundly different way. So, if you look at an Oculus strapped to someone's face, you may think: is that how we're going to interact with the world? No – I think that's missing the point. All these hardware devices, and what we can do on them for storytelling, are amazing already, but they're not fully formed yet. It's going to require some imagination in looking forward and answering the question: is this where things are going, and how serious should we be about the investment and time we put into it?