
Bayesian superyacht lifted off seabed – DW – 06/21/2025
A British-flagged luxury superyacht that sank off Sicily last year, killing British tech magnate Mike Lynch and six others, was lifted from the water on Saturday.
Salvage recovery crews completed the complex operation to bring the Bayesian yacht ashore for further investigation.
One of Europe's most powerful maritime cranes hauled the 56-meter-long (184-foot) luxury yacht from beneath the waves during the day.
The salvage team, led by British company TMC Marine, pumped seawater out of the hull and the vessel was held in an elevated position, surrounded by pollution containment booms, while further checks were carried out.
The Bayesian's upper decks appeared badly damaged, while the blue hull was encrusted with mud after sitting on the seabed at a depth of 50 meters.
"The hull of the superyacht Bayesian has today been successfully and safely recovered from the sea off the coast of northern Sicily," said TMC Maritime. "This follows a delicate lifting procedure that began early today."
The vessel had been slowly raised from the seabed, 50 meters below the surface, over the preceding three days to allow steel lifting straps, slings and harnesses to be secured under the keel.
Eight steel lifting straps were used to bring the hull upright and formed part of a steel wire lifting system that began raising the vessel out of the water on Saturday.
The Bayesian was missing its 72-meter (236-foot) mast, which was cut off and left on the seabed for future removal. The mast had to be detached to allow the hull to be brought to a nearly upright position that would allow the craft to be raised.
The floating crane platform will now move the Bayesian to the Sicilian port of Termini Imerese on Sunday, where a special steel cradle is waiting for it.
The vessel will then be made available for investigators to help determine the cause of the sinking.
The Bayesian sank on August 19, 2024, off Porticello, near Palermo, during a violent storm as Lynch was treating friends to a cruise to celebrate his acquittal two months earlier in the United States on fraud charges.
The 59-year-old sold Autonomy, a software maker he founded in 1996, to Hewlett-Packard for $11 billion in 2011, and was acquitted of fraud charges in June 2024 by a federal court jury in San Francisco.
Lynch, his daughter and five others died, while 15 people survived, including the captain and all crew members except the chef.
Italian authorities are continuing to conduct a full criminal investigation.