Adobe tells Congress to give artists the right to block AI from training on their creative works
Adobe signaled support for mandating new rights to protect artists from AI
A senior Adobe representative told Congress on Wednesday that the company supports giving artists the right to stop artificial intelligence systems from training on their creative works, and indicated it would back new legislation from Congress on the issue.
At a Senate Judiciary Committee subcommittee hearing on the link between AI and intellectual property rights, Adobe General Counsel and Chief Trust Officer Dana Rao told senators that allowing AI systems to train on large sets of data is critical to ensuring accurate AI outputs.
But Rao said Adobe, the imaging and design software giant, recognizes that limits must be put in place to protect creators. For that reason, he said Adobe trained its own generative AI system, Firefly, on licensed images from the company’s stock photo library and other openly licensed content.
"This approach supports creators and customers by training on the data set that is designed to be commercially safe," he said, adding that government has a role to play in ensuring that other companies respect creative works in the same way.
"We believe creators should be able to attach a ‘do not train’ tag to their work," Rao said. "With industry and government support, we can ensure AI data callers read and respect this tag, giving creators the option to keep their data out of AI training data sets."
Subcommittee Chairman Chris Coons, D-Del., asked Rao whether Congress should step in and mandate such an option so artists can ensure their works are not used to train AI systems.
"I do think that there’s an opportunity for Congress to mandate the carrying of a tag like that, a credential like that, wherever the content goes," he said.
Rao went further, saying Congress should create a federal law preventing AI systems from unfairly impersonating artists’ styles.
"We believe artists should be protected against this type of economic harm, and we propose Congress establish a new federal anti-impersonation right that would give artists the right to enforce against someone intentionally attempting to impersonate their style or likeness," he said.
"Holding people accountable who misuse AI tools is a solution we believe goes to the heart of some of the issues our customers have, and this new right would help address that concern," Rao added.
Lawmakers on Wednesday also heard from an artist who has worked on feature films and who said artists need new rules to protect their copyrighted work from AI systems.
"I have never worried about my future as an artist until now," Karla Ortiz told the subcommittee. "Generative AI is unlike any other technology that has come before. It is a technology that uniquely consumes and exploits the hard work, creativity and innovation of others."
"I found that almost the entirety of my work, the work of almost every artist I know, and the work of hundreds of thousands of artists had been taken without our consent, credit or compensation," she said. "These works were stolen and used to train for-profit technologies with data sets that contain billions of image and text data pairs."
Members of Congress are discussing a range of ideas for regulating AI, but no major legislation has gotten off the ground yet. Earlier this year, there was talk of setting up a new federal agency, or at least a commission, to handle AI, but neither idea has gone anywhere.