

Musical Misconduct

Tech News: 11th September 2024

In a first-of-its-kind case, a US musician has been charged with fraud for allegedly using thousands of automated bot accounts to stream AI-generated tracks from which he made more than $10m in royalty payments. 

Which Tracks? 

The music tracks that 52-year-old Michael Smith, from North Carolina in the US, allegedly used came from co-conspirators (a music promoter and the CEO of an AI music company) who, from 2018, supplied him with hundreds of thousands of AI-generated songs – songs described as “instant music” by one of the alleged co-conspirators.


Uploaded To Music Streaming Platforms 

Smith then allegedly uploaded these tracks to music streaming platforms like Spotify, Apple Music, Amazon Music, and YouTube Music. Typically, when songs are uploaded to music streaming platforms, the artists earn royalties based on the number of streams their songs receive. 

 

Then Used Automated Bots To Inflate The Number of Streams 

In the case of Mr Smith, the allegation is that he then used “bots” (automated programs) to stream the AI-generated songs billions of times. The indictment says that, at the height of his alleged fraudulent scheme, Mr Smith “used over a thousand bot accounts simultaneously to artificially boost streams of his music across the Streaming Platforms”. It’s alleged that, by manipulating the streaming data in this way, Smith was able to fraudulently obtain “more than $10 million in royalty payments to which he was not entitled”.

How Royalties Work Via Music Streaming Platforms 

Royalties paid to songwriters, composers, lyricists, and music publishers (“Songwriters”) are funded by streaming platforms like Spotify and Apple Music. These platforms allocate a percentage of their revenue (called the “Revenue Pool”) to performance rights organisations (PROs) and the Mechanical Licensing Collective (MLC). PROs manage performance royalties, while the MLC handles digital mechanical royalties for reproducing and distributing songs. The streaming platforms send both streaming data and revenue to these organisations, which then distribute royalties proportionally to the Songwriters based on the number of streams their songs received. 

Similarly, performing artists and record companies (“Artists”) receive royalties from a separate pool, also funded by a percentage of streaming platform revenues. These funds are allocated based on the total number of streams each artist’s recordings receive, and the royalties are typically paid to Artists through record labels and distribution companies. 
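To make the pro-rata model above concrete, here is a minimal Python sketch (the figures, artist names, and function are hypothetical, purely for illustration) of how a fixed revenue pool is divided in proportion to stream counts, and why bot-inflated streams shrink every legitimate creator’s payout:

```python
# Minimal sketch of pro-rata royalty distribution (all figures hypothetical).

def distribute_royalties(revenue_pool: float, streams: dict[str, int]) -> dict[str, float]:
    """Split revenue_pool across catalogues in proportion to stream counts."""
    total_streams = sum(streams.values())
    return {name: revenue_pool * count / total_streams for name, count in streams.items()}

# Honest market: two artists share a fixed pool in proportion to real streams.
honest = {"artist_a": 1_000_000, "artist_b": 500_000}
print(distribute_royalties(15_000.0, honest))
# {'artist_a': 10000.0, 'artist_b': 5000.0}

# Add a bot-inflated catalogue: the pool is unchanged, so every legitimate
# artist's payout shrinks even though their real listenership did not.
inflated = {**honest, "bot_catalogue": 3_000_000}
print(distribute_royalties(15_000.0, inflated))
# {'artist_a': ~3333.33, 'artist_b': ~1666.67, 'bot_catalogue': 10000.0}
```

Because the pool is fixed, fraudulent streams don’t create new money; they reallocate royalties away from creators with real listeners, which is why stream inflation is treated as fraud rather than a victimless gaming of the system.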

Why Fraud? 

Streaming fraud, i.e. using bots to inflate stream numbers, diverts royalties from legitimate creators to those engaging in fraudulent activity. In this case, the allegation is that Michael Smith committed fraud by making false and misleading statements to streaming platforms, the above-mentioned performance rights organisations (PROs), and music distribution companies. It’s alleged that his intent was to conceal a massive streaming manipulation scheme in which he used bots to inflate the number of streams for AI-generated songs. By doing so, prosecutors say, Smith used deceptive practices to fraudulently divert royalties meant for legitimate creators, i.e. those who earned their revenue through genuine engagement from real listeners rather than automated bots.

Technology Improved Over Time 

Emails obtained from Smith and other participants in the scheme also appear to show how the technology used to create the tracks improved over time, making his scheme more difficult for the streaming platforms to detect. For example, in an email from February, Mr Smith claimed that his “existing music has generated at this point over 4 billion streams and $12 million in royalties since 2019.”

Not The Only Case Of This Kind 

Although prosecutors have described this as the first criminal case of its kind, it’s not the only music streaming fraud case of recent years. For example:

– The Danish executive case (2024), in which the defendant received an 18-month prison sentence after using bots from 2013 to 2019 to inflate streams on platforms like Spotify and Tidal, earning around $635,000 in fraudulent royalties.

– The Boomy AI fraud incident (2023), in which Boomy, an AI music startup, had millions of its tracks blocked by Spotify due to suspected bot-driven streaming fraud, leading to increased scrutiny of AI-generated music on platforms.

 

– The Tidal fake streams investigation (2019), in which Norwegian authorities examined claims that Tidal (a global music streaming platform) had inflated streams for artists like Beyoncé and Kanye West by hundreds of millions, resulting in massive royalty payouts and one of the largest streaming fraud cases to date.

Other AI-Related Music Incidents of Note 

It’s not just the use of bots to inflate streams that has caused AI-driven problems in the music world. For example:

– In 2023, a song titled “Heart on My Sleeve”, featuring AI-generated voices mimicking the Canadian artists Drake and The Weeknd, went viral on platforms like TikTok and Spotify. Created by a user named Ghostwriter977, the track accumulated millions of streams before being pulled from streaming services following a complaint from Universal Music Group (UMG). UMG argued that the AI technology used to clone the artists’ voices breached copyright law and harmed the rights of real artists. Despite its removal, the incident highlighted growing concerns over the use of AI in the music industry and its potential legal implications.

– In April 2024, over 200 prominent artists, including Billie Eilish, Chappell Roan, Elvis Costello, and Aerosmith, signed an open letter calling for an end to the “predatory” use of AI in the music industry. The letter, coordinated by the Artist Rights Alliance, highlighted concerns that AI technology is being used irresponsibly to mimic artists’ work without permission, undermining creativity and devaluing musicians’ rights. The artists warned that AI models are being trained on their copyrighted work without consent, with the potential to replace human artistry and dilute the royalties that artists depend on. They called on developers and platforms to commit to avoiding AI usage which infringes on artists’ rights or denies them fair compensation.

 

Can Tech Firms Steal Your Voice?

 

In a notable AI-related class action lawsuit filed in 2024, voice actors Paul Skye Lehrman and Linnea Sage accused AI startup Lovo of illegally cloning and selling their voices without consent. The pair were originally contacted via Fiverr in 2019 and 2020 and asked to record voiceover samples for what they were told were “academic research” or radio test scripts. Lehrman was paid $1,200 and Sage $400, with both assured that their recordings wouldn’t be used for anything beyond these stated purposes. They later discovered, however, that their voices had been cloned using AI and used in commercial content without permission.

Much to Lehrman’s surprise and shock, he first heard his cloned voice on a YouTube video about the Russia-Ukraine conflict, discussing topics he had never recorded. The irony deepened when he heard his voice again on the podcast “Deadline Strike Talk”, where his AI-generated voice was used to discuss the impact of AI on Hollywood and the ongoing strikes, i.e. issues central to the lawsuit itself! Sage similarly discovered her voice in promotional materials for Lovo. The lawsuit claims that Lovo misappropriated their voices to market AI-generated versions under the pseudonyms “Kyle Snow” and “Sally Coleman”, damaging their careers by reducing job opportunities and potentially replacing their work entirely with AI.

This lawsuit highlights a growing concern in the entertainment industry about AI’s unchecked use to clone voices and likenesses without authorisation, raising issues of intellectual property, consent, and fair compensation.

 

What Does This Mean For Your Business? 

The rise of AI in the music and entertainment industry introduces both exciting opportunities and serious risks for music streaming platforms, artists, and individuals whose voices or music may be used without consent. For streaming platforms, cases like Michael Smith’s alleged fraudulent streaming manipulation expose real vulnerabilities in royalty systems, requiring platforms to implement more robust detection methods. As AI-generated content becomes more sophisticated, distinguishing between real and artificial streams will be crucial to prevent fraudulent activity that undermines royalty distribution and trust.
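As a purely illustrative sketch of what such detection might build on (the thresholds, field names, and signals below are invented for this example; real platforms combine far richer behavioural and network analysis), a simple heuristic could flag accounts whose listening patterns look automated:

```python
# Toy heuristic for flagging possibly automated streaming accounts.
# All thresholds and fields are invented for illustration only.

from dataclasses import dataclass

@dataclass
class AccountStats:
    streams_per_day: float     # average daily play count
    distinct_tracks: int       # unique tracks played in the period
    mean_play_seconds: float   # average listen duration per stream

def looks_automated(a: AccountStats) -> bool:
    # Humans rarely sustain thousands of plays a day, loop a tiny set of
    # tracks at high volume, or consistently stop just past the ~30-second
    # mark at which many platforms count a stream as royalty-bearing.
    return (
        a.streams_per_day > 2000
        or (a.distinct_tracks < 5 and a.streams_per_day > 200)
        or a.mean_play_seconds < 35
    )

print(looks_automated(AccountStats(4500, 3, 31.0)))   # True: bot-like pattern
print(looks_automated(AccountStats(40, 120, 180.0)))  # False: plausible listener
```

In practice, per-account signals like these are only a starting point; the indictment against Mr Smith describes over a thousand bot accounts operating simultaneously, which is exactly the kind of coordinated pattern that network-level analysis, rather than per-account rules, is needed to catch.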

For artists, AI’s ability to clone voices, styles, and entire songs presents an existential challenge to creativity and ownership. The growing number of cases, including the Heart on My Sleeve incident and the lawsuit against Lovo, highlights how AI can be used to replicate an artist’s voice or music without permission, threatening not only their revenue but also their creative integrity. This is why prominent artists, as seen in the open letter signed by Billie Eilish, Chappell Roan, and others, are calling for clearer protections and industry standards to prevent AI from being used in ways that exploit human artistry without proper compensation.

Voice actors and other professionals who rely on their vocal talents are particularly vulnerable to AI voice cloning. Lehrman and Sage’s experience with Lovo illustrates how voice recordings can be misappropriated and used commercially under false pretences, damaging careers and reducing future opportunities. This case highlights the need for businesses, especially those in the tech and entertainment sectors, to develop transparent and ethical policies around AI-generated content, ensuring that creators are properly informed, compensated, and protected.

Beyond the entertainment industry, AI misuse poses a potential risk for the rest of us, especially when it comes to the unauthorised use of voices or faces. AI technology, like voice cloning and deepfakes, can be used to imitate individuals without their consent, creating the potential for serious ethical and legal challenges. For businesses, this means increased vulnerability to fraud, such as the possibility of AI-generated voices being used to impersonate employees or executives in phishing scams. Without proper safeguards, AI can become weaponised to deceive customers or commit fraud against organisations by replicating voices or faces in ways that can bypass security measures, leading to financial and reputational damage.

In response to these growing concerns, industry experts and creators are calling for stronger regulations and protections. Clear consent processes, the development of intellectual property rights linked to a person’s voice and likeness, and technological solutions for detecting fraudulent AI usage now appear to be essential. Ideally, companies and platforms need to collaborate with policymakers and rights organisations to ensure that AI is used ethically, protecting the creative economy and the rights of individuals.


For more information on our services, give us a call on 01603 859669 or send us an enquiry.
