“AI is transforming music faster than we think.” The Washington Post’s Yan Wu explores the intersection of AI and creativity.
As artificial intelligence continues to reshape industries, its influence on the music world has been profound, sparking both excitement and controversy. AI music-generation tools, such as those built by Suno, are making waves in the creative process, but they also raise questions about authorship, copyright, and the future of musical identity.
Yan Wu, opinion graphics reporter at The Washington Post, front-end developer, and Northeastern University alumna with a deep passion for storytelling, has been closely following these developments. In her latest immersive journalism story, “Why Musicians Are Smart to Embrace AI,” she discusses how AI is affecting the music industry.
Storybench interviewed Wu via email to discuss her experiences with AI tools, her thoughts on how AI impacts creativity, and her advice for those covering this rapidly evolving field.
What inspired you to start exploring the impact of AI on music?
Initially, I was worried that AI would replace creative jobs, including my own. I had been reporting on AI tools, and one of my earlier stories focused on AI image generators. I discovered that AI could actually elevate the creative process, helping artists in unexpected ways. At an AI conference, I met Eric Lyon, a composer and professor at Virginia Tech, who was using AI to inspire his music projects. His approach intrigued me, and I started exploring how AI was being used in music creation. That eventually led me to experiment with tools like Suno.
How does AI challenge traditional concepts of authorship and musical identity?
AI music companies like Suno are not fully transparent about how their models are trained or what data they use. These companies often claim “fair use” when it comes to using pre-existing music to train their models, but musicians—particularly those whose work is being used without permission—see it differently. This is why major players like Universal Music Group, Sony Music Entertainment, and Warner Music Group are now suing AI music apps like Suno and Udio, accusing them of copyright infringement.
Creativity often builds upon what has come before, but AI is different. It uses data, not personal experience, to generate new content. This blurs the lines of authorship and raises serious questions about who should get credit when AI is involved in creating a new piece of music.
Why did you include your personal experience with AI tools in your column?
Since it was an op-ed, I wanted to make the piece more personal. By sharing my own experience as an amateur music enthusiast, I hoped to connect with readers who might be skeptical about AI’s place in the creative process. Many people, including musicians, are still cautious about AI and the potential copyright issues it introduces. I wanted them to see AI’s potential before rushing to judgment, so including my experiments felt like a way to make the issue more relatable.
What advice do you have for journalists looking to get up to speed with AI tools?
Be open-minded and willing to try different AI tools, especially those that can improve productivity. My team and I have been using tools like ChatGPT for tasks like data analysis and coding. However, it’s important to verify the information these tools provide. For example, I used ChatGPT to brainstorm ideas for coding a design element in my story, but I still had to adjust and test the code myself.
My advice is to use AI as a research tool or learning resource, but don’t rely on it too heavily. It’s great for generating ideas and exploring new approaches, but you still need to do your own fact-checking and thinking.
What challenges do journalists face when covering rapidly evolving technology in the music industry?
The biggest challenge is keeping up with the pace of change. When I started my project, Suno was on version 2 of its model and no lawsuits had been filed. By the time I finished, Suno had released version 3 and major lawsuits had been filed against it. The news in this space is constantly shifting, which can make it difficult to keep a story relevant as developments unfold.
The key is to stay informed and find your own angle on the issue. For me, it was about combining my background in design and coding to create an interactive experience that helped readers engage with the topic in a new way. But for other journalists, it could be about finding unique perspectives on how AI is being used or its broader implications.
How did you approach the design of your AI-generated music piece to enhance user engagement?
For embedded audio, user experience is everything. I used bold, colorful buttons to encourage readers to click, and I made sure that once they clicked on the first track, the other audio files would autoplay as they scrolled. We also added a sticky control button so users could easily toggle the audio on and off as they moved through the article. The goal was to make the experience as intuitive and seamless as possible, ensuring readers remained engaged throughout the story.
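Wu's description of that interaction, a deliberate click to start the first track, scroll-triggered autoplay for the clips that follow, and a sticky control to toggle sound, maps onto a common front-end pattern. The TypeScript sketch below is a hypothetical illustration of that pattern using IntersectionObserver, not her actual code; the selectors (audio.story-track, #audio-toggle) and the 50% visibility threshold are assumptions made for the example.

```ts
// Hypothetical sketch of scroll-driven audio for a story page.
// Assumes the article markup contains <audio class="story-track"> elements
// and a sticky <button id="audio-toggle">.

let readerHasOptedIn = false; // becomes true after the reader starts the first track
let muted = false;

const tracks = Array.from(
  document.querySelectorAll<HTMLAudioElement>("audio.story-track")
);
const toggle = document.querySelector<HTMLButtonElement>("#audio-toggle");

// The first track only plays on an explicit click, which both signals that the
// reader wants sound and helps satisfy browser autoplay policies.
tracks[0]?.addEventListener("play", () => {
  readerHasOptedIn = true;
});

// Once the reader has opted in, later clips start as they scroll into view
// and pause again as they scroll out, so the audio follows the story.
const observer = new IntersectionObserver(
  (entries) => {
    if (!readerHasOptedIn || muted) return;
    for (const entry of entries) {
      const audio = entry.target as HTMLAudioElement;
      if (entry.isIntersecting) {
        tracks.forEach((t) => t !== audio && t.pause()); // one clip at a time
        void audio.play().catch(() => {
          /* ignore autoplay rejections */
        });
      } else {
        audio.pause();
      }
    }
  },
  { threshold: 0.5 } // a clip must be at least half visible before it starts
);
tracks.slice(1).forEach((t) => observer.observe(t));

// Sticky control: one button silences everything mid-scroll. When the reader
// unmutes, playback resumes the next time a clip scrolls into view.
toggle?.addEventListener("click", () => {
  muted = !muted;
  if (muted) {
    tracks.forEach((t) => t.pause());
  }
  toggle.textContent = muted ? "Turn sound on" : "Turn sound off";
});
```

Requiring that explicit first click is also what keeps the experience from fighting browser autoplay restrictions: once the reader opts in, the observer can hand playback from clip to clip as they scroll, and the sticky toggle gives them a way out at any point.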
As AI continues to revolutionize the music industry, it brings both opportunities and challenges. While the technology is still in its infancy, Yan Wu believes that those who approach it with curiosity and creativity will find new ways to push the boundaries of art and storytelling. For now, the debate around AI, creativity, and copyright is just beginning, but one thing is clear: the future of music is being reshaped before our eyes.