Remember LimeWire, the free peer-to-peer file-sharing software that Generation X and Millennials used to disrupt the music industry in the early 2000s? Because of LimeWire’s technological advancements, music labels reluctantly changed their business models and evolved alongside the new technology as society entered the new millennium. The technological evolution that began with such tools forever changed how music is consumed and jumpstarted ethical conversations about the value of the art that musicians produce.
Similar to developments decades ago, a new kind of technology is reshaping the music industry today: artificial intelligence.
Like LimeWire in the past, AI-generated music is shifting the landscape in both the music and technology industries. However, the origins of AI-generated music aren’t as recent as many may think. The first use of computers to assist in the creation of music dates back to the mid-1950s, when computer scientists began exploring algorithms for musical composition. One of the early results was The Illiac Suite, drawing its name from the ILLIAC I computer that Lejaren Hiller and Leonard Isaacson used to create it in 1957. Today, it is widely regarded as the first electronically composed musical score.
The evolution of AI music: A company focused on human ability and performance
Fast forward to the present day: AI music companies are emerging around the world, many of them employing novel approaches to applying machine learning to music composition.
Co-founded by Canadian-British entrepreneur, tech executive, and former Top 10 Canadian EDM DJ Neel Lee Chauhan and his two sons, Evan C and EC^2, MNDLB5 (pronounced Mind Labs) was launched in partnership with American music producer Andrew Hogarth. The idea came to Neel when he noticed that his sons performed better in sports while listening to their favorite EDM tracks.
“I was with my boys and they both play ice hockey as good Canadians,” Chauhan recently reminisced during an on-camera interview with The Debrief. “We live here in the New York area, and they both play for the Junior Devils, and what I noticed [was] they were listening to a lot of music on their way to games, and they started to put headphones in.”
“I noticed that whenever both my kids were listening to their favorite EDM tracks, which they’re big fans of, they would skate faster with better focus and accuracy,” Chauhan said.
Following his intuition, Chauhan assembled a team of alumni from Google, Salesforce, and Toronto Metropolitan University. Together, they conducted an academic literature review, which confirmed that music could enhance human focus, engagement, and performance.
“The research overwhelmingly over multiple decades says that music does drive human performance, focus, and engagement, including in sports, in work, coding, and countless use cases. So that became the [research] and scientific foundation for the founding of MNDLB5 (Mind Labs), and the name is no mistake,” Chauhan said.
“Artificial intelligence and other modern technologies come in, [and] we’re using those technologies to accelerate the impact and the ability to, [in] an efficient and creative way, produce music that drives human performance,” Chauhan added.
In the past year, MNDLB5 has launched six new electronic dance music (EDM) tracks that utilize AI to elevate music production and human performance. These six tracks represent some of the earliest electronic dance music creations developed with AI, including generative AI like ChatGPT, in a comprehensive production process that encompasses songwriting, vocal translations, sound effects, cover art, and social media videos.
“MNDLB5 brings together a world-class team of electronic music artists combined with experts in Artificial Intelligence with the goal of engineering Electronic Dance Music that drives human performance,” Chauhan says. “The opportunity to make history driving the evolution of Electronic Dance Music gets me excited about MNDLB5,” said Andrew Hogarth, the lead producer and sound engineer for MNDLB5.
The Jump Lab Mix, featuring AI-enhanced vocals by Evan C and EC^2, was released on March 15. A new track titled Toxic Lab Mix is set for release on Friday, March 22, during Miami Music Week, which will showcase an “AI Duet” between Canadian singer Ricki Ayela and the AI-generated voice of MNDLB5.
Neel began his career as an EDM DJ in Toronto, where he headlined underground raves and big room clubs alongside world-renowned electronic DJs like Deep Dish, Roger Sanchez, Armand Van Helden, “Magic” Juan Atkins, DJ Sneak, and Derrick Carter. His performances were regular features on CTV’s Much Music, Canada’s premier music channel. Neel has since transitioned into a tech executive based in New York, holding leadership positions at Google, Nectar Loyalty (Acquired by AIMIA), and Yodle (Acquired by Web.com).
Celebrity musicians and industry leaders sound off on Generative AI music
Improving human performance is one way AI companies are revolutionizing the industry. However, to get a better lay of the land and hear various voices of the industry, The Debrief reached out to a noted musician and an industry leader from different ends of the field to gather their thoughts on AI-generated music.
DMC (Darryl McDaniels) from the legendary hip-hop group Run-DMC is known for his distinctive voice and iconic contributions to the group’s pioneering rap style. Alongside Joseph “Run” Simmons and Jason “Jam Master Jay” Mizell, Run-DMC was instrumental in the early development of hip-hop music and its surrounding counterculture.
Speaking with The Debrief, DMC was candid about his views regarding the use of AI in the composition process.
“I do not like the idea of AI being used for music creation,” DMC told The Debrief in an email. “There’s no reason for it at all because there are too many great musicians who are alive and real. AI should only be used to play or playback real music.”
“The music industry shouldn’t evolve to using AI to CREATE music! They can use it as an assistant for other things in the industry and business but not to create,” DMC added. “There are TOO MANY great songwriters, singers and musicians! AI can be used in industry, but not to create. Plain and simple!”
It’s not only musicians who are following the evolution of AI-generated music but also the professionals working behind the scenes, many of whom are moving quickly to adapt as the industry rapidly changes. Music professionals, including managers, producers, marketers, and others, have expressed differing opinions on the use of AI technology. Some view such advanced technologies as a helpful tool that could benefit artists and the industry if used ethically.
“I think AI is just another tool available to people,” says Patrice Laflamme, a Juno Awards jury member and artist manager of top recording DJ and artist Domeno, along with Zombie Boy, Naya Ali, and Francesco Yate. “Having a tool cabinet full of tools doesn’t mean much if you don’t know how to use them. Even with all the tools available, someone who has never built a house cannot recreate exactly what they have in mind.”
“It is the same with songwriters and composers. We can provide them with all the tools possible, but if they do not know how to use them, they will not be able to fully execute their ideas. There was a time when recording music and composing music was much more complicated,” Laflamme told The Debrief.
“The recording process of analog recording, for instance, was longer, more complicated, and more costly compared to today. But at the end of the day, technological help does not apply only to music. It’s part of our lives; think about your car’s rearview camera, your GPS, the 1001 things you can do with your cell phone.”
“I mean, who still memorizes all their friends’ birth dates and phone numbers? Technology has already been beneficial to everyone, not just creators, whether we want to admit it or not,” Laflamme added.
“I therefore see AI as another tool to facilitate the work of creators,” Laflamme told The Debrief. “The people who will know how to use it properly might be able to do beautiful things with it. But in my opinion, artificial intelligence will not do all the work for you. At least not at a competitive level. I believe that AI alone will not replace humans because humans can feel emotions, innovate, and predict future trends.”
Ethical implications and considerations for generative AI music
In 2010, the Recording Industry Association of America (RIAA) asserted that LimeWire facilitated illegal music downloads for millions of users. The situation quickly led to multiple digital piracy lawsuits and, ultimately, the company’s closure. Modern conversations surrounding ethics and ownership in AI-generated music have intensified because of how accessible AI tools have become for creating, sharing, or replicating virtually any type of sound. AI-generated music raises several ethical implications for the music industry and artists.
Key considerations include:
Fairness and Diversity
AI algorithms are only as good as the data they are trained on. AI-generated music could potentially reinforce biases from its training data, raising concerns about fairness and diversity in music creation.
Transparency
Another crucial component of AI-generated music and the ethics surrounding its creation involves transparency in the musical composition process. This includes data sourcing, questions over the use of copyrighted music, training models, and human involvement in creating the music. Consideration of these factors will help users and creators understand the technology better while building trust with the audience and fellow musicians.
“At the heart of the ethical use of AI in music production is transparency,” reads a statement on the website of Soundful, an AI music-generation service. “As we stand at this pivotal juncture, it’s crucial to navigate the ethical dimensions of AI-generated music with a clear vision and purpose.”
Hip-hop legend McDaniels (DMC) adds, “The only ethical use for it is to make available real music by real creators. There is no ethical reason for AI to create music at all, when once again… there are so many musicians who can do it.”
Intellectual Property (IP)
AI-generated music blurs the lines of ownership and authorship. Who owns the rights to AI-generated music, the creator of the AI or the user who operates it? This raises several questions related to copyright and intellectual property rights.
Laflamme stresses the point of copyright in AI works, saying that “Identifying all the legal aspects of copyright and ownership will be necessary. Who owns the creation? Is the person who came up with the idea and used AI for it or the one who created the AI software? At the moment, many AI softwares possess complete copyright and ownership of their software creations. Certain companies can grant copyright and ownership, but it’s important to read the licenses carefully.”
Understanding the legalities within any industry is a must, especially when they relate to copyright and IPs. “Some companies offer licenses that are royalty-free, while others will not be royalty-free,” Laflamme told The Debrief. “Many legal issues arise because many users do not take the time to read their user licenses or simply do not understand their content.”
Creativity and Authenticity
AI can replicate virtually any style or genre of music, which could potentially lead to the industry being saturated with relatively similar-sounding music. If used differently, however, AI could potentially lead to the creation of entirely new genres.
Either scenario challenges the notion of artistic creativity and raises questions about the authenticity of AI-generated music compared with human-created music. Could AI create entirely new genres that spark cultural change, given that music has been at the heart of many countercultural movements since the 1960s? Or will AI simply oversaturate the music industry with music lacking substance?
Impact on Human Musicians
As AI becomes more advanced, some in the music industry have become concerned about its impact on human musicians. Will AI be used to replace human musicians and vocalists in certain roles, such as session musicians, singers, or composers, leading to job displacement?
“Right now, if you want to create a vocal track, whether that be elect, you know, EDM track, house music track, even a pop track, you could create a track without having to actually access a professional vocalist,” Chauhan said in his interview with The Debrief. “And as a result of that, more people will create. We’ll create vocals. More people will create music.”
He adds, “The interesting twist to all this as we’ve been growing Mind Labs is that ultimately, even if you create a full AI track with a purely algorithmic developed voice, you’ll ultimately need a human, an amazing vocalist, male, female or otherwise to perform that track. And I think it’s quite well known that performance is where monetization has been for music in live performance.”
Debating the need for separate awards shows for AI-generated music
In addition to ethical concerns, award ceremonies like The Grammys and The Junos have opened the door for certain aspects of AI-generated music to qualify for awards submissions and recognition. Should there be separate awards shows for AI-generated music?
The Juno Awards’ Canadian Criteria guidelines state that AI-generated music refers to music that is composed, produced, or generated by artificial intelligence systems. These systems use algorithms and machine learning techniques to analyze vast amounts of existing music data and generate new compositions based on patterns, styles, and structures found in the data. Elements of AI can be used in eligible projects but cannot be the sole or core component.
The guidelines also state that humans must be the primary creative contributors to the submitted work. AI contributions cannot solely be responsible for an eligible project meeting the category eligibility.
Presently, only humans are eligible to be nominated, and a work that contains no human authorship is not eligible to be nominated in any JUNO Award category. Applicants must be able to provide proof that a human is the primary creative contributor to the submitted project.
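The Juno guidelines above describe these systems as learning patterns from existing music and generating new material from them. As a purely illustrative sketch of that idea (real systems use vastly larger models and datasets; the function names here are hypothetical), the following toy example builds a first-order Markov chain from a short note sequence and samples a new melody from its learned transitions:

```python
import random

def build_markov_chain(notes):
    """Map each note to the list of notes observed to follow it."""
    chain = {}
    for a, b in zip(notes, notes[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, length, rng=None):
    """Sample a new sequence by walking the learned transitions."""
    rng = rng or random.Random(0)
    melody = [start]
    for _ in range(length - 1):
        options = chain.get(melody[-1])
        if not options:
            break  # dead end: no observed continuation
        melody.append(rng.choice(options))
    return melody

# "Training data": a short sequence of note names.
corpus = ["C", "D", "E", "C", "D", "G", "E", "C"]
chain = build_markov_chain(corpus)
print(generate(chain, "C", 8))
```

Every note in the output follows a transition actually observed in the input, which is the sense in which such systems generate "new compositions based on patterns found in the data" while never stepping outside what they were trained on.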
In June 2023, Exclaim! released an article after the Grammys’ deliberation on the eligibility of ‘Heart on My Sleeve,’ the AI-generated track that mimicked the voices of Drake and The Weeknd, for Best Rap Song and Song of the Year. The JUNO Awards in Canada aligned with the US Recording Academy, noting that top honors should be reserved for art created by human artists.
Last year, the Grammy Awards also updated their music submission policy, and Harvey Mason Jr., CEO of the Recording Academy, which presents the Grammy Awards, explained to CNBC at the Fast Company Innovation Festival 2023, “If there’s AI performing the song, but there’s humans writing it, then it’s eligible for a writing category. If AI wrote the song but a human singer … is singing the song, then it’s going to be eligible for a performance category.”
He also added at a conference, “No, we’re not going to award AI creativity. But we’re not going to disclude or disqualify the creators working with it.”
When Run-DMC’s McDaniels was asked if AI-generated work should have its own award show, he told The Debrief, “Of course, there should be a separate award show for AI-created music, but not for AI-created artists. The AI artist cannot even be mentioned in the same space as us human artists. The award show should be for the human being who created the AI artist and music. Not an AI artist!”
“The award goes to the person behind the creation. Not the AI-created artist.”
Laflamme expressed a different perspective from that of McDaniels. “We’re not talking about songs completely created by AI. There are no categories for songs that have multiple songwriters and/or composers, there are none for songs that have royalty free sample loops, there are none for songs that use plug-ins to correct the quality of the artist’s vocals, etc. Yet, everything I have just listed helps the performance of several artists.”
The Future of AI Music Composition
The future of AI-generated music lies in the hands of the music industry, AI technology, and the creators themselves. However, looking at this topic from a bird’s-eye view, the democratization of music creation has the potential to enable individuals to produce music without requiring extensive training or costly equipment. Nonetheless, it also sparks questions and concerns about the artist’s role and the worth of human creativity in the music sector.
Accessibility is the name of the game for Laflamme, and potentially many other music creators he represents. “Like all technological tools before its arrival, AI will surely make creation more accessible. That will make it even harder for the record labels. Right now, there are too many offers and not enough requests. The number of songs that come out every day and are completely under the radar is already almost incalculable. I think the amount of releases per day will increase even more and it will be much harder to stand out. Record companies will need to continue to rely on available analytics even if they don’t always highlight the best talent,” Laflamme notes.
Yet, it is the music industry and its creators who hold the future, shaping and establishing guidelines for the upcoming generation of music makers. In his email to The Debrief, McDaniels concluded, saying, “A lot of fake shit is going to be going on! But to each his own.”
“Real recognizes real,” he added.
Chrissy Newton is a PR professional and founder of VOCAB Communications. She hosts the Rebelliously Curious podcast, which can be found on The Debrief’s YouTube Channel. Follow her on X: @ChrissyNewton and at chrissynewton.com.