Diddy verdict: A trial on fame, consent and #MeToo – DW – 07/02/2025


DW | July 2, 2025
The Sean "Diddy" Combs' trial tested the limits of fame, power, consent and of the #MeToo movement in an era of AI misinformation and manosphere backlash.
Unlike some other notorious US celebrity trials, the trial of Sean "Diddy" Combs was not televised, yet it kept people riveted, perhaps because of the scurrilous allegations he faced.
On July 2, the 55-year-old — once a titan of the 1990s and 2000s hip-hop scene — was found guilty on two counts of transportation to engage in prostitution but was acquitted of the most serious charges of racketeering and sex trafficking.
His seven-week trial began on May 12, during which a jury of 12 heard testimony from 34 witnesses, including ex-girlfriends, former employees of Combs, male escorts and federal agents.
At the time of writing, lawyers for the Bad Boy Records founder were working to get him released on bail.
Different parties also weighed in throughout the trial: legacy media, TikTokers, YouTubers, influencers, even AI (steered, as yet, by humans looking to make a quick buck).
Thus, the trial of the United States of America v. Sean Combs, a/k/a "Puff Daddy," a/k/a "P. Diddy," a/k/a "Diddy," a/k/a "PD," a/k/a "Love" (the case's full name) wasn't just about an entertainment mogul charged with serious federal sexual offenses.
It also trained the spotlight on issues of sexual consent, power imbalances and "truth" in diverse echo chambers.
According to the 17-page indictment against Combs, he "abused, threatened and coerced women and others around him to fulfill his sexual desires, protect his reputation and conceal his conduct."
Prosecutors alleged Combs used his wealth and influence to coerce two girlfriends to take part in drug-fueled, days-long sexual performances, also known as "freak offs."
Cassandra "Cassie" Ventura, a singer and Combs' ex-partner of 11 years, testified that she was manipulated and assaulted during their relationship. A 2016 video showing Combs attacking Ventura in a hotel hallway was presented in court, prompting Assistant US Attorney Christy Slavik to describe Combs as "a leader of a criminal enterprise who doesn't take no for an answer."
Yet Combs' legal team countered that the relationships and parties were consensual. His "swinger lifestyle" was unconventional, they said, but not illegal. Instead, they accused Ventura and others of financial motives, citing her $20 million settlement in 2023 as well as other alleged hush-money agreements.
This "Why didn't she just leave?" stance also ignited debate among online communities and advocacy groups.
Some media figures associated with the "manosphere" — a loosely defined network of male-centric commentators — had voiced skepticism about Ventura's allegations, suggesting instead that she was looking to cash in.
For instance, YouTuber Greg Adams said on his Free Agent Lifestyle channel, "There's no accountability on her part. Everything is: 'My brain still ain't developed, he slipped me a drug, he tricked me,' when it should've been: 'I was 21, Diddy was a damn near millionaire kabillionaire and I was upgrading.'"
Speaking to ABC News, Carolyn West, a professor of clinical psychology at the University of Washington, said that perpetrators may psychologically manipulate their partners into believing they are exaggerating the abuse they are experiencing.
She added that when the perpetrator has a high profile, the survivor may be afraid to leave because they may be seen as less credible than their abuser, and because the abuser can use money or power to control their partner, including coercing them to remain in the relationship against their will.
The Combs trial also coincided with the rape and sexual abuse retrial of film producer Harvey Weinstein; the sexual abuse allegations made against him in October 2017 sparked the #MeToo movement.
Some have since questioned the movement's efficacy, citing the outcome of the Johnny Depp-Amber Heard trial and the fact that Donald Trump was elected US president despite being found liable for defaming and sexually abusing writer E. Jean Carroll over an alleged incident in the 1990s.
Speaking on the sidelines of the Combs case, Gloria Allred, who has represented clients in the Weinstein case, told ABC News: "People keep saying to me the #MeToo movement is dead. There's no evidence of that… It's alive and well."
The non-televised trial of Combs has also provided fodder for influencers and YouTubers. Alongside traditional media, they have been livestreaming their takes on the case from the Manhattan federal courthouse.
According to a 2024 Pew Research Center study, about one in five Americans gets news from influencers online; among people under 30, the share is 37%.
Speaking to news agency AFP, Reece Peck, a professor of political communication and journalism at the City University of New York, called the competition among content creators "Darwinian."
"They're so scared of losing their clientele or their audience. And so with that logic, that you have to constantly create content, the news cycle is such an attractive source of material," adding that Combs' trial is a fount: "It's sex, it's violence and it's celebrity."
AI-generated misinformation — often called "AI slop" — also surged around the Diddy trial, as nearly two dozen anonymous YouTube channels published around 900 videos using fabricated thumbnails, fake celebrity quotes and deepfakes related to the case.
A report found that these clips amassed roughly 70 million views, often falsely claiming that singers like Justin Bieber or Jay-Z "testified" or made shocking revelations.
One creator admitted that launching a "Diddy channel" was "probably the quickest route to making $50,000," underscoring how such content is monetized.
Although YouTube removed or demonetized several channels, sensational, low-quality AI media has the potential to cloud people's understanding of trials and even to overshadow actual judicial proceedings.
