Can the music industry make AI the next Napster?

Sure, everyone hates record labels — but the AI industry has figured out how to make them look like heroes. So that’s at least one very impressive accomplishment for AI.

AI is cutting a swath across a number of creative industries — with AI-generated book covers, the Chicago Sun-Times publishing an AI-generated list of books that don’t exist, and AI-generated stories at CNET under real authors’ bylines. The music industry is no exception. But while many of these fields are mired in questions about whether AI models are illegally trained on pirated data, the music industry is coming at the issue from a position of unusual strength: the benefits of years of case law backing copyright protections, a regimented licensing system, and a handful of powerful companies that control the industry. Record labels have chosen to fight several AI companies on copyright law, and they have a strong hand to play.

Historically, whatever the tech industry inflicts on the music industry will eventually happen to every other creative industry, too. If that’s true here, then all the AI companies that ganked copyrighted material are in a lot of trouble.

Can home prompting kill music careers?

There are some positive things AI music startups can accomplish — like reducing barriers for musicians to record themselves. Take the artist D4vd, who recorded his breakout hit “Romantic Homicide” in his sister’s closet using BandLab, an app that lets people make music without a studio and includes some AI features. (D4vd started making music so he could soundtrack his Fortnite YouTube montages without getting copyright strikes for using existing songs.) The point of BandLab is giving more musicians around the world the opportunity to record music, send it into the world, and maybe get paid for their work, says Kuok Meng Ru, the CEO of the app’s parent company. AI tools can supercharge that, he says.

That use, however, isn’t exactly what big-time AI companies like Suno and Udio have in mind. Suno declined to comment for this story. Udio did not respond to a request for comment.

Suno and Udio are designed to let music consumers generate new songs with a few words. Users type in, say, “Prompt: bossa nova song using a wide range of percussion and a horn section about a cat, active, energetic, uptempo, chaotic” and get a song, wholesale, without even writing their own lyrics. The idea that most listeners will do this regularly seems unlikely — making music is more work than just listening to it, even with text prompts — as does the idea that AI will replace people’s favorite human artists. (Also, the music is pretty bad.)

“AI flooded the market with it.”

A lot of listening is passive consumption, like a person putting on a playlist while doing the dishes or studying, or a business piping background tunes to customers. That background music is up for grabs — not by consumers, but by spammers using these tools. They’re already generating consumer-facing slop and putting it on Spotify, effectively crowding out real artists.

That seems to be the major use case for these apps. Generating a two-minute song on Udio costs a minimum of eight credits; free users get around 400 credits monthly, and for $10 a month you get 1,200, the equivalent of, at most, 150 songs. A Spotify Premium individual plan costs $12 a month and gets you just about everything ever recorded, plus audiobooks. It also takes many, many fewer clicks to listen to Spotify than it does to generate your own songs — so if you’re looking for something to listen to while you cook, Spotify is just easier.
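
To sanity-check that math, here’s a minimal Python sketch; the credit counts and prices are just the figures cited above, and the rest is arithmetic (these rates may change over time):

```python
# Back-of-the-envelope comparison of generating songs on Udio vs. streaming
# on Spotify, using only the figures cited in the paragraph above.

CREDITS_PER_SONG = 8      # minimum credits for a two-minute Udio track
FREE_CREDITS = 400        # approximate free monthly allotment
PAID_CREDITS = 1200       # credits included in the $10/month tier
UDIO_PRICE = 10.00        # USD per month, paid tier
SPOTIFY_PRICE = 12.00     # USD per month, Premium individual plan

free_songs = FREE_CREDITS // CREDITS_PER_SONG    # 50 songs
paid_songs = PAID_CREDITS // CREDITS_PER_SONG    # 150 songs
cost_per_song = UDIO_PRICE / paid_songs          # about $0.07 per track

print(f"Free tier: up to {free_songs} generated songs per month")
print(f"Paid tier: up to {paid_songs} songs per month, ~${cost_per_song:.2f} each")
print(f"Spotify Premium: ${SPOTIFY_PRICE:.2f}/month for a near-complete catalog")
```

At roughly seven cents per generated track, the cost is trivial for someone churning out filler at volume, which is part of why the spam use case looms so much larger than the casual-listener one.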

But the math there changes if you’re looking for background music for your YouTube videos — or anything else that’s meant to be listened to publicly. That means AI music threatens people who support themselves by making incidental music for advertisements, or recording “perfect fit content” for Spotify, or other, less-glamorous work. Taylor Swift’s career isn’t endangered by AI music — but the real people who make the background music for Chill Beats to Study To, or the hold music you hear on the phone, are.

“I wouldn’t want to be [new-age musician] Steven Halpern and have my future career based on meditation music,” says David Hughes, who served as CTO for the Recording Industry Association of America (RIAA) for 15 years. He now works as a tech consultant for the music industry at Hughes Strategic. “AI flooded the market with it. There’s no business making it anymore.”

As in other creative industries, AI music tools are poised to hollow out the workaday middle of the market. Even new engineering tools have their downsides. Jimmy Iovine, who eventually founded Interscope Records and Beats Electronics, started his career as an audio engineer before making his name by producing Patti Smith’s Easter. This is kind of like starting in the mail room and becoming the CEO; if more of the engineering work is done by AI, that removes career paths. The next Jimmy Iovine might not get his start, Hughes says. “How does anyone apprentice?”

And it’s (possibly) illegal

About a year ago, the major labels brought suit against Suno and Udio. The fight is about training data: the labels say the companies stole copyrighted work and violated copyright law by using it to build their models. Suno has effectively admitted in court documents that it trained its AI song generator on copyrighted work; so has Udio. Both argue it was fair use, the legal doctrine that permits unlicensed use of copyrighted work in certain circumstances.

Virtually every creative industry is in some kind of similar fight with AI companies. A group of authors is suing Meta, Microsoft, and Bloomberg for allegedly training on their books. The New York Times is suing Microsoft and OpenAI. Visual artists have sued Stability AI, the maker of Stable Diffusion, along with Midjourney; Getty Images is also suing Stability AI; Disney and Universal are suing Midjourney. Even Reddit is suing Anthropic. Training data is at issue in all of these suits.

“Thou shalt not steal.”

So far, the legal takes on AI have been contradictory and, at times, baffling. There doesn’t seem to be a consistent through line, so it’s hard to know where the law will ultimately end up. Still, music brings its own legal history to bear: decades of case law on unauthorized sampling. That may mean it’s entitled to stronger protections.

In Bridgeport Music v. Dimension Films, a case about N.W.A’s sample of Funkadelic’s “Get Off Your Ass and Jam,” the US Court of Appeals for the Sixth Circuit ruled that the uncompensated sampling violated copyright law. In the decision, the court found that only the copyright owner could duplicate the work — so all sampling requires a license. Some other courts have rejected that ruling, but it remains influential. There’s also Grand Upright Music v. Warner Bros. Records, in which the US District Court for the Southern District of New York ruled that Biz Markie’s sample of Gilbert O’Sullivan’s “Alone Again (Naturally)” was copyright infringement. The written opinion in the case begins, “Thou shalt not steal.”

“Some of the sampling cases have suggested that sound recordings might be entitled to stronger protections than other copyrighted works,” says James Grimmelmann, a professor at Cornell Law School. Those protections may extend beyond sampling to generative AI, especially if the AI outputs too closely resemble copyrighted work. “From that perspective, music becomes kind of untouchable. You just can’t do this kind of work on it.”

Music is also complicated because performances are bound up in the right of publicity. In the case of the fake Drake track “Heart on My Sleeve,” the soundalike vocal may violate Drake’s right of publicity. Artists such as Tom Waits and Bette Midler have won suits against more mundane human soundalikes. Proving that someone meant to imitate Drake might be even more straightforward if the lawsuit contains the prompt.

This may be an easier case for music companies to make

As in other AI fair use cases, one of the key questions is whether a derivative work, such as “BBL Drizzy,” is intended to replace or disrupt the market for the original. In 2023, the Supreme Court ruled against the Andy Warhol Foundation, finding that licensing Warhol’s screenprint of Lynn Goldsmith’s photo of Prince was not fair use. One of the key factors was that Condé Nast had licensed Warhol’s image instead of Goldsmith’s photo — and she received no credit or payment.

In May, Register of Copyrights Shira Perlmutter released a pre-publication report that found that AI training in general was not necessarily fair use. In the report, one of the factors considered was whether an AI product supplanted the use of the original. “The use of pirated collections of copyrighted works to build a training library, or the distribution of such a library to the public, would harm the market for access to those works,” the report said. “And where training enables a model to output verbatim or substantially similar copies of the works trained on, and those copies are readily accessible by end users, they can substitute for sales of those works.”

This may be an easier case for music companies to make than it is for, say, ad writers. (What copywriter wants to admit they’re so uncreative they can be replaced by a machine, first of all?) Not only are there fewer music rights holders, which lets them negotiate as a bloc; it’s also simple enough to point to AI-generated output singing Jason Derulo’s name or mimicking “Great Balls of Fire.” That’s pretty clear-cut.

Another crucial factor — one that matters particularly to the music industry — was lost licensing opportunities. If copyrighted works are being licensed as AI training data, doing a free-for-all snatch and grab robs rights holders of their ability to participate in that market, the report notes. “The copying of expressive works from pirate sources in order to generate unrestricted content that competes in the marketplace, when licensing is reasonably available, is unlikely to qualify as fair use,” the report says.

The RIAA alleges illegal copying on the front end and infringing outputs on the back end

Recently, Anthropic got a ruling in a copyright case that departs from this analysis. According to Judge William Alsup of the Northern District of California, using books as training data is fair use — with two big caveats. First, the inputs must be legally acquired, and second, the outputs must be non-infringing. Since Anthropic pirated millions of books, that still leaves the door open for massive damages, even if using the books to train isn’t wrong.

When it comes to the Suno and Udio suits, the RIAA alleges illegal copying on the front end and infringing outputs on the back end, Grimmelmann says. Suno and Udio can introduce evidence to rebut those allegations, but the Alsup ruling gives them little to work with. It’s also not clear Suno can rebut the copying allegation at all. “Suno’s training data includes essentially all music files of reasonable quality that are accessible on the open Internet, abiding by paywalls, password protections, and the like,” its lawyers wrote in the filing arguing that Suno’s training was fair use. Udio, meanwhile, admits it may have used some copyrighted recordings, but its response to the suit doesn’t say how they were acquired; if Udio bought those songs, under the Anthropic case’s reasoning, it might be off the hook.

But that’s not the only pertinent ruling. The very next day, in a case where authors alleged Meta had infringed on their copyright by training on their books, Judge Vince Chhabria directly addressed Alsup’s ruling, saying it was based on an “inept analogy” and brushed aside “concerns about the harm it can inflict on the market for the works it gets trained on.” While Chhabria found in favor of Meta, he noted that it was because of bad lawyering on the part of the authors’ team.

Still, Chhabria’s ruling is somewhat better for Suno and Udio on the input side, because it doesn’t draw a distinction around piracy, Grimmelmann says. It is much, much worse for them on the output side. “Chhabria holds that ‘market dilution’ — creating lots of works that compete with the plaintiffs’ works — is a plausible theory of market harm,” he says in an email after the ruling. That’s also in line with the Copyright Office’s report.

“We live in a world where everything is licensed.”

Suno and Udio have another problem: some generative AI companies have been licensing artists’ work, and by paying nothing for works that other companies license, they are undercutting that market. “The fact that there are existing licensing deals for music training is relevant, if that market is better-developed than the market for licensing books,” Grimmelmann says. Chhabria’s opinion points out that it’s quite difficult to license books for training, because the rights are so fragmented. “Either finding that there is a market that copyright owners should be able to exploit, or finding that there isn’t one, is circular, in that the court’s holding tends to reinforce its findings about the market.”

That effectively stacks the deck against Suno and Udio, and against any other music AI companies that didn’t license their training data. Music licenses for AI training cost between $1 and $4 per track. High-quality datasets can run from $1 to $5 per minute for non-exclusive licenses, and from $5 to $20 per minute for exclusive ones. Transcription and emotion labeling, among other factors, command higher prices.
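
For a sense of what those rates imply at training scale, here’s a minimal sketch; the rate ranges are the figures above, while the corpus size and average track length are hypothetical assumptions for illustration:

```python
# Rough cost estimate for licensing a music training corpus, using the
# per-track and per-minute rate ranges cited above. The corpus size and
# average track length below are hypothetical, illustrative assumptions.

TRACKS = 1_000_000        # assumed corpus size (hypothetical)
AVG_MINUTES = 3.5         # assumed average track length (hypothetical)

PER_TRACK = (1, 4)        # USD per track
NON_EXCLUSIVE = (1, 5)    # USD per minute, non-exclusive license
EXCLUSIVE = (5, 20)       # USD per minute, exclusive license

minutes = TRACKS * AVG_MINUTES

def cost_range(rates, units):
    low, high = rates
    return f"${low * units:,.0f} to ${high * units:,.0f}"

print("Per-track licensing:  ", cost_range(PER_TRACK, TRACKS))      # $1M-$4M
print("Non-exclusive dataset:", cost_range(NON_EXCLUSIVE, minutes)) # $3.5M-$17.5M
print("Exclusive dataset:    ", cost_range(EXCLUSIVE, minutes))     # $17.5M-$70M
```

Even under these rough assumptions, a million-track corpus runs into seven or eight figures, which is exactly the kind of existing market a court can point to when weighing the harm of unlicensed copying.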

And unlike other creative industries, music already has an established system for licensing copyrights and collecting royalties, notes Kuok, of the BandLab recording app. The app has its own AI tool called SongStarter, which lets people who are making music begin with an AI-generated track. Kuok favors licensing music for AI training, and making sure musicians get paid.

“We live in a world where everything is licensed,” Kuok says. “The solution is an evolution of what existed before.” How to collect, who collects, and how much gets collected are open questions, in Kuok’s view, but licensing itself is not. “We work in an all-rights-reserved world where we believe copyright is an important institution.”

“Everyone knew it was required.”

To that end, BandLab has built options into its licensing program. Artists can indicate they are open to AI licensing, which means they’ll be contacted if a company wants to license their work. If they agree, their work is bundled with an assortment of other artists’ approved works for the licensing deal, which BandLab negotiates on their behalf. Kuok says BandLab is discussing training deals now, though he declined to give specifics about the financial terms of those deals, or who he is in talks with.

Kuok did say there were some other things he considers in negotiations. “It’s important what the use is for,” he says. “That has to be specified. These are fixed-term contracts, fairly large deals, worth six figures over a multiyear period.” He recommends maintaining as much control as possible over copyrighted work to avoid diluting the value of existing IP.

That may be why Suno and Udio are reportedly in talks with the major labels to license music for training their models. Other AI companies already do. Ed Newton-Rex, formerly of Stability AI, told me all the music he’d worked with at Stability was licensed; he quit his position as a vice president there after the company decided that training on copyrighted data was fair use. He’d been working on generative music systems since 2010, and licensing had been the norm until fairly recently, he told me.

“Everyone knew it was the law,” he says. “Everyone knew it was required.”

But after ChatGPT came out, some music AI companies thought they might also just grab whatever existed and let the courts sort it out. “I don’t think it’s fair use,” he says. “Given that gen AI generally competes with what it’s trained on, it’s a bad thing to take creators’ works and outcompete them.” Newton-Rex has also demonstrated ways to get Suno in particular to output music that’s strikingly similar to copyrighted work. That, too, is a problem.

“I don’t think there’s an outcome where this winds up being all fair use,” says Grimmelmann.
