Striking the right bargain for creativity and innovation

Axios · a day ago
The rules of creativity and AI are being rewritten — but the ink isn't dry yet.
Generative AI is unlocking new ways to brainstorm, build and express ideas faster than ever. For creatives, that means more tools, more opportunity — and more reach.
The potential is massive: Artists can remix ideas, streamline workflows and collaborate in ways that weren't possible even a year ago.
Adobe believes the future will be one that amplifies human creativity — not replaces it.
The background: With AI's power to create realistic content in seconds, new questions are emerging around AI training data availability, attribution, credit and control. That's especially true when it comes to style imitation — a gray area in copyright law.
The challenge: Without the right guardrails, generative AI models trained on public images can replicate an artist's unique style, even without copying a specific work. Because copyright doesn't protect style, this opens the door for AI-generated content to closely imitate creators and compete with them in the marketplace.
The impact: The continued economic output of the creative industry hangs in the balance. AI's ability to keep innovating and learning also depends on protecting creators from bad actors who use AI to misappropriate their styles.
The goal: Ensure that creators can continue to produce works in their distinctive styles without the fear of confusion in the marketplace — and thrive in the process.
"Creativity is a fundamentally human trait that AI can never replace," says Jace Johnson, Adobe's vice president of government affairs and public policy. "But creators today have very limited ways to keep control or credit attached to any of their work." Adobe believes that this can change — with the help of industry and policymakers.
Why it's important: Copyright law wasn't built to anticipate the era of synthetic content. Legal clarity remains elusive, especially around training data.
"Fair use is going to be decided in the courts... and it's likely that AI systems are going to be allowed to train on some copyrighted works," Johnson says.
Without clearer rights or attribution, creators risk losing visibility — and control.
The strategy: Focus on the area where change is possible no matter how questions around training and fair use get resolved: AI outputs.
Johnson says policymakers can move fast to protect creators from the economic harm of similar AI-generated outputs. That's why Adobe supports legislation like the Preventing Abuse of Digital Replicas Act, which would protect artists' likeness and voice from being misused commercially.
It's also why Adobe advocates for a broader right to ensure that artists and creators don't have their signature styles copied using AI.
This shift in focus underpins Adobe's support of legislation that would curb unauthorized digital impersonation.
"This type of federal anti-impersonation right would give that artist a right they currently don't have to go into a courtroom and stop content that is too similar to their style if it is economically disadvantageous to them," explains Johnson.
An expert take: Before any policy can work, creators must be visible.
"You have to give them a way to protect their identity and get credit for their work first," Johnson says. "If there's no way to identify the artist, then it is really difficult to talk about a policy that may help them."
That's the principle behind Content Credentials, a digital provenance system (or "nutrition label" for digital content) that travels with an image, video or audio file. Creators can choose to attach Content Credentials to share information about their content like who made it, how, and whether AI was involved. Adobe has been advocating for widespread adoption of Content Credentials.
"We now, for the first time, offer them a way to get credit for their work, and express their intent with the work that they've created," Johnson says. "It's a great way for them to have some control over any specific pieces of content that they don't want going to train AI."
With attribution and control restored, creators can reassert agency and transform their work from a lost digital artifact into a living, protected asset.
"Everybody has a story to tell... and Content Credentials give life to that. It gives a voice to that person and to that story. And letting creatives be discoverable turns their art into an asset for them as opposed to an artifact," says Johnson.
For many, the AI era brings uncertainty — about ownership, credit and economic survival. Adobe believes that clearer rights, paired with transparency tools, can change that.
Because when AI mimics a creator's work without consent, it doesn't just threaten their identity; it undermines their income, reputation and role in the creative economy.
Looking ahead: Creativity isn't being replaced; it's being redefined and greatly expanded.
As generative tools reshape how we communicate and collaborate, Johnson sees a future where the next wave of innovation centers not on code, but on imagination.
"The next unicorns are likely to not be tech," says Johnson. "They're likely to be creatives."

Related Articles

Think your ChatGPT therapy sessions are private? Think again.

Fast Company · 24 minutes ago

If you've been confessing your deepest secrets to an AI chatbot, it might be time to reevaluate. With more people turning to AI for instant life coaching, tools like ChatGPT are sucking up massive amounts of personal information about their users. While that data stays private under ideal circumstances, it could be dredged up in court, a scenario that OpenAI CEO Sam Altman warned about during an appearance on Theo Von's popular podcast this week.

"One example that we've been thinking about a lot... people talk about the most personal shit in their lives to ChatGPT," Altman said. "Young people especially use it as a therapist, as a life coach: 'I'm having these relationship problems, what should I do?' And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality." Altman says that as a society we "haven't figured that out yet" for ChatGPT.

Altman called for a policy framework for AI, though in reality OpenAI and its peers have lobbied for a regulatory light touch. "If you go talk to ChatGPT about your most sensitive stuff and then there's a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up," Altman told Von, arguing that AI conversations should be treated with the same level of privacy as a chat with a therapist.

While interactions with doctors and therapists are protected by federal privacy laws in the U.S., exceptions exist for instances in which someone is a threat to themselves or others. And even with those strong privacy protections, relevant medical information can be surfaced by court order, subpoena or warrant. Altman's argument seems to be that, from a regulatory perspective, ChatGPT shares more in common with licensed, trained specialists than it does with a search engine. "I think we should have the same concept of privacy for your conversations with AI that we do with a therapist," he said.

Altman also expressed concerns about how AI will adversely impact mental health, even as people seek its advice in lieu of the real thing. "Another thing I'm afraid of... is just what this is going to mean for users' mental health. There's a lot of people that talk to ChatGPT all day long," Altman said. "There are these new AI companions that people talk to like they would a girlfriend or boyfriend. I don't think we know yet the ways in which [AI] is going to have those negative impacts, but I feel for sure it's going to have some, and we'll have to, I hope, we can learn to mitigate it quickly."

Even OpenAI's CEO Says Be Careful What You Share With ChatGPT

CNET · 24 minutes ago

Maybe don't spill your deepest, darkest secrets to an AI chatbot. You don't have to take my word for it; take it from the guy behind the most popular generative AI model on the market.

Sam Altman, the CEO of ChatGPT maker OpenAI, raised the issue this week in an interview with host Theo Von on the This Past Weekend podcast. He suggested that your conversations with AI should have protections similar to those you have with your doctor or lawyer. At one point, Von said one reason he was hesitant to use some AI tools was that he "didn't know who's going to have" his personal information. "I think that makes sense," Altman said, "to really want the privacy clarity before you use it a lot, the legal clarity."

More and more AI users are treating chatbots like their therapists, doctors or lawyers, and that's created a serious privacy problem for them. There are no confidentiality rules, and the actual mechanics of what happens to those conversations are startlingly unclear. Of course, there are other problems with using AI as a therapist or confidant, like how bots can give terrible advice or reinforce stereotypes or stigma. (My colleague Nelson Aguilar has compiled a list of the 11 things you should never do with ChatGPT and why.)

Altman's clearly aware of the issues here, and seems at least a bit troubled by them. "People use it, young people especially, use it as a therapist, a life coach: I'm having these relationship problems, what should I do?" he said. "Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it."

The question came up during a part of the conversation about whether there should be more rules or regulations around AI. Rules that stifle AI companies and the tech's development are unlikely to gain favor in Washington these days, as President Donald Trump's AI Action Plan released this week expressed a desire to regulate the technology less, not more. But rules to protect them might find favor.

Read more: AI Essentials: 29 Ways You Can Make Gen AI Work for You, According to Our Experts

Altman seemed most worried about the lack of legal protections that would keep companies like his from being forced to turn over private conversations in lawsuits. OpenAI has objected to requests to retain user conversations during a lawsuit with The New York Times over copyright infringement and intellectual property issues. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

"If you go talk to ChatGPT about the most sensitive stuff and then there's a lawsuit or whatever, we could be required to produce that," Altman said. "I think that's very screwed up. I think we should have the same concept of privacy for your conversations with AI that you do with your therapist or whatever."

Be careful what you tell AI about yourself

For you, the issue isn't so much that OpenAI might have to turn your conversations over in a lawsuit. It's a question of whom you trust with your secrets. William Agnew, a researcher at Carnegie Mellon University who was part of a team that evaluated chatbots on their performance dealing with therapy-like questions, told me recently that privacy is a paramount issue when confiding in AI tools. The uncertainty around how models work, and how your conversations are kept from appearing in other people's chats, is reason enough to be hesitant.

"Even if these companies are trying to be careful with your data, these models are well known to regurgitate information," Agnew said. If ChatGPT or another tool regurgitates information from your therapy session or from medical questions you asked, it could surface when your insurance company or someone else with an interest in your personal life asks the same tool about you. "People should really think about privacy more and just know that almost everything they tell these chatbots is not private," Agnew said. "It will be used in all sorts of ways."

AI referrals to top websites were up 357% year-over-year in June, reaching 1.13B

TechCrunch · 24 minutes ago

AI referrals to websites still have a long way to go to catch up with the traffic that Google Search provides, but they're growing quickly. According to new data from market intelligence provider Similarweb, AI platforms in June generated over 1.13 billion referrals to the top 1,000 websites globally, a figure that's up 357% since June 2024. Google Search, however, still accounts for the majority of traffic to these sites, delivering 191 billion referrals in June 2025.

One category of particular interest these days is news and media. Online publishers are seeing traffic declines and are preparing for a day they're calling "Google Zero," when Google stops sending traffic to websites. The Wall Street Journal, for instance, recently reported on data showing how AI Overviews are killing traffic to news sites. And a Pew Research Center study out this week found that in a survey of 900 U.S. Google users, 18% of some 69,000 searches showed AI Overviews, and users clicked links in those results only 8% of the time. When there was no AI summary, users clicked links nearly twice as often, 15% of the time.

Similarweb found that June's AI referrals to news and media websites were up 770% since June 2024. Some sites will naturally rank higher than others that block access to AI platforms, as The New York Times does as a result of its lawsuit against OpenAI over the use of its articles to train models. In the news media category, Yahoo led with 2.3 million AI referrals in June 2025, followed by Yahoo Japan (1.9M), Reuters (1.8M), The Guardian (1.7M), India Times (1.2M) and Business Insider (1.0M).

In terms of methodology, Similarweb counts AI referrals as web referrals to a domain from an AI platform like ChatGPT, Gemini, DeepSeek, Grok, Perplexity, Claude or Liner. ChatGPT dominates here, accounting for more than 80% of AI referrals to the top 1,000 domains. The company's analysis also looked at categories beyond news, including e-commerce, science and education, tech/search/social media, arts and entertainment, and business.

In e-commerce, Amazon saw the most referrals, followed by Etsy and eBay, at 4.5M, 2.0M and 1.8M, respectively, during June. Among the top tech and social sites, Google, not surprisingly, was at the top of the list with 53.1 million referrals in June, followed by Reddit (11.1M), Facebook (11.0M), GitHub (7.4M), Microsoft (5.1M), Canva (5.0M), Instagram (4.7M), LinkedIn (4.4M), Bing (3.1M) and Pinterest (2.5M). The analysis excluded OpenAI's own website because so many of its referrals came from ChatGPT pointing to its own services. Across all other domains, the No. 1 site by AI referrals for each category included YouTube (31.2M), ResearchGate (3.6M), Zillow (776.2K), (992.9K), Wikipedia (10.8M), (5.2M), (1.2M), Home Depot (1.2M), Kayak (456.5K) and Zara (325.6K).
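As a rough sanity check on those headline numbers, the sketch below works through the arithmetic they imply. Note that the June 2024 baseline is inferred from the reported 357% growth rather than stated by Similarweb, so treat it as an estimate.

```python
# Back-of-the-envelope arithmetic on the reported Similarweb and Pew figures.

# "Up 357% year-over-year" means June 2025 = June 2024 * (1 + 3.57).
june_2025_ai_referrals = 1.13e9   # reported AI referrals, June 2025
growth = 3.57                     # +357% year-over-year
implied_2024_baseline = june_2025_ai_referrals / (1 + growth)
print(f"Implied June 2024 baseline: {implied_2024_baseline / 1e6:.0f}M")  # ~247M

# AI referrals still trail Google Search by two orders of magnitude.
google_referrals = 191e9          # reported Google Search referrals
print(f"AI referrals as a share of Google's: "
      f"{june_2025_ai_referrals / google_referrals:.1%}")  # ~0.6%

# Pew's finding: with an AI Overview shown, links were clicked 8% of the
# time, versus 15% with no AI summary, i.e. roughly half as often.
print(f"Click-through ratio with AI Overview: {0.08 / 0.15:.2f}")  # ~0.53
```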
