
Man Allegedly Used Bots and AI to Cash In on Streaming Revenue

In a groundbreaking case, a North Carolina musician, Michael Smith, has been accused of using artificial intelligence (AI) and thousands of bots to illegally stream songs billions of times, pocketing millions of dollars in royalties.

This case, prosecuted by federal authorities, marks the first of its kind, setting a precedent for how AI-generated music fraud might be tackled in the future. Smith, who faces charges of wire fraud, conspiracy to commit wire fraud, and money laundering, could spend decades behind bars if convicted.

The scheme, which allegedly involved the manipulation of AI-generated tracks, resulted in Smith earning over $10 million in royalty payments over the course of several years. Damian Williams, the U.S. Attorney, didn’t mince words, saying, “Through his brazen fraud scheme, Smith stole millions in royalties that should have been paid to musicians, songwriters, and other rights holders whose songs were legitimately streamed.” This fraud not only hurt the music industry but also undermined the very people who work hard to produce legitimate, authentic music.

The charges were made public after an unsealed indictment revealed the depths of Smith’s operation. He’s accused of utilizing hundreds of thousands of AI-generated songs, which were streamed by automated bot accounts. These bots allowed him to avoid detection for a considerable amount of time while generating billions of fake streams. Smith’s scheme, which began around 2018, reportedly involved as many as 10,000 active bot accounts at its peak.

But Smith didn’t act alone. Prosecutors allege that his operation was supported by the CEO of an unnamed AI music company. This individual allegedly supplied Smith with thousands of AI-generated tracks each month. In return, Smith provided metadata about the songs, such as artist names and track details, and shared a portion of the streaming revenue. In an email disclosed in the indictment, the co-conspirator described the scheme as “instant music,” admitting that what they were doing wasn’t truly music, but rather a product churned out to exploit the system.

FBI Acting Assistant Director Christie M. Curtis weighed in, asserting that Smith would “face the music” for his fraudulent activities. She emphasized the FBI’s commitment to cracking down on individuals who exploit advanced technology for illicit profit. This case, according to Curtis, highlights the importance of protecting genuine artistic talent from those who manipulate emerging technologies for personal gain.

The depth of the scheme became even clearer in emails obtained during the investigation. In a message from earlier this year, Smith bragged about his success, claiming that his AI-generated tracks had already garnered over 4 billion streams and netted $12 million in royalties since 2019. These figures underscore just how widespread and lucrative this fraudulent operation became before the law caught up with him.

The indictment also highlights how rapidly AI technology has advanced, making it increasingly difficult for streaming platforms to detect fraudulent activity. As the technology improved, so did the sophistication of Smith’s scheme. While platforms like Spotify, Apple Music, and YouTube have worked to combat fraudulent streams, this case demonstrates just how complex and pervasive the issue has become.

To address the growing threat of artificial streams, Spotify recently implemented stricter policies. The streaming giant announced in April that it would begin charging labels and distributors for artificially inflated streams and set higher thresholds for royalty payouts. It also extended the minimum track length that noise recordings, such as white noise, must reach to earn royalties, making them harder to exploit for manipulation.

The rise of AI-generated music isn’t just causing headaches for streaming platforms—it’s also sparking outrage among artists and record labels. With the growing availability of free AI tools to create music, concerns are mounting about the fair distribution of profits. Many artists feel that their work is being used without proper recognition or compensation. These concerns came to a head in 2023, when platforms rushed to remove a viral track that cloned the voices of superstars Drake and The Weeknd.

Earlier this year, some of the music industry’s biggest names—including Billie Eilish, Elvis Costello, and Aerosmith—signed an open letter calling for an end to the “predatory” use of AI in the music business. These artists argue that AI technology is not only infringing on their creative rights but also posing an existential threat to the industry as a whole.

Smith’s case could be the first of many as authorities and the music industry grapple with the rise of AI-generated music and its potential for exploitation. This landmark prosecution underscores the need for stronger safeguards and regulations to protect the future of music in an age where technology is advancing at breakneck speed.
