Screw the money — Anthropic’s $1.5B copyright settlement sucks for writers

About half a million writers are eligible for a payout of at least $3,000 each, thanks to a historic $1.5 billion settlement in a class action copyright lawsuit that a group of authors brought against Anthropic.
The milestone settlement marks the biggest payout in the history of American copyright law, but it is not a victory for authors – it is another win for tech companies.
Tech giants are racing to collect as much written material as possible to train their LLMs, the groundbreaking AI chat products like ChatGPT and Claude – the same products that endanger the creative industry, even when their output is milquetoast. These AIs become more capable as they ingest more data, but after scraping the entire internet, these companies are starved for new sources of text.
That is why Anthropic, the company behind Claude, downloaded millions of books from "shadow libraries" and fed them into its AI. This particular lawsuit, Bartz v. Anthropic, is one of dozens that have been filed against companies like Meta, Google, OpenAI, and Midjourney over the legality of training AI on copyrighted works.
But writers are not getting this settlement because their work was fed into an AI – this is just a pricey slap on the wrist for Anthropic, a company that just raised $13 billion, because it illegally downloaded the books instead of buying them.
In June, federal judge William Alsup sided with Anthropic, ruling that it is indeed legal to train AI on copyrighted material. The court found this use case "transformative" enough to be protected by the fair use doctrine, a carve-out of copyright law that has not been updated since 1976.
"Like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them – but to turn a hard corner and create something different," the judge wrote.
It was the piracy – not the AI training – that Judge Alsup allowed to proceed to trial, but with Anthropic settling, a trial is no longer necessary.
"Today's settlement, if approved, will resolve the plaintiffs' remaining legacy claims," said Aparna Sridhar, deputy general counsel at Anthropic, in a statement. "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems."
As dozens of cases over the relationship between AI and copyrighted works head to court, judges can now look to Bartz v. Anthropic as a precedent. But given the ramifications of these decisions, a different judge may well come to a different conclusion.
