
Search ops resume at Gujarat bridge collapse site to retrieve missing man
The death toll in the tragedy rose to 20 on Friday after one more body was recovered from the river and another injured person died at the hospital during treatment.
One more person is still missing, and efforts to find him resumed on Saturday, Vadodara collector Anil Dhameliya said.
Several vehicles plunged into the river after a segment of the 40-year-old bridge near Gambhira village, connecting Anand and Vadodara districts, collapsed on Wednesday morning.
The collector said another focus of the operation on Saturday will be to remove a large chunk of slab that has fallen into the river.
"In the next stage of our operation, we will take the help of a technical team to remove the main slab and retrieve the body of the missing person. The Gujarat Pollution Control Board will be roped in to safely extricate the tanker carrying sulphuric acid that has fallen into the river," Dhameliya said.
Citing a preliminary probe report, a state minister had on Friday said the collapse was caused by the "crushing of pedestal and articulation joints".
Rescuers have been working amid challenging conditions, including 3.5-metre-deep mud, the release of soda ash into the water and the presence of a tanker containing sulphuric acid.
The National Disaster Response Force, State Disaster Response Force and other agencies are part of the rescue efforts.
Chief Minister Bhupendra Patel has suspended four engineers of the state's Roads and Buildings Department in connection with the bridge collapse.
Minister Rushikesh Patel, who visited the site of the tragedy on Friday, said that the action was taken based on a preliminary report submitted by a committee set up by the chief minister.
A high-level probe committee of the state's roads and buildings department will submit a detailed report in 30 days, he said.
Of the 7,000 bridges surveyed in the state, the government has identified those that need repairs or require new construction, and action is being taken on them accordingly, he said after the visit.
Gujarat has witnessed six major incidents of bridge collapse since 2021.
In December 2021, a slab collapsed during the construction of the Mumatpura flyover on the outskirts of Ahmedabad city. Nobody was injured in the incident.
In October 2022, as many as 135 persons were killed when a British-era suspension bridge over the Machchhu River in Morbi town collapsed.
In June 2023, a newly built bridge on the Mindhola River in the Tapi district collapsed. No one was hurt in the accident.
In September 2023, four persons were injured after a portion of an old bridge on the Bhogavo River in Surendranagar district collapsed when a 40-tonne dumper was navigating it near Wadhwan city.
In October 2023, two persons sitting in an autorickshaw died after six concrete girders or slabs, which were installed on the pillars of an under-construction bridge near the RTO Circle in Palanpur town of Banaskantha, collapsed.
In August 2024, a small bridge on the Bhogavo River connecting Habiyasar village with Chotila town in Surendranagar district collapsed after a sudden rise in water following discharge from an overflowing dam. No casualties were reported in the incident.
This article was generated from an automated news agency feed without modifications to text.
Related Articles

The Hindu
AI-powered 'nudify' apps fuel deadly wave of digital blackmail
After a Kentucky teenager died by suicide this year, his parents discovered he had received threatening texts demanding $3,000 to suppress an AI-generated nude image of him.

The tragedy underscores how so-called sextortion scams targeting children are growing around the world, particularly with the rapid proliferation of "nudify" apps: AI tools that digitally strip off clothing or generate sexualised imagery.

Elijah Heacock, 16, was just one of thousands of American minors targeted by such digital blackmail, which has spurred calls for more action from tech platforms and regulators. His parents told U.S. media that the text messages ordered him to pay up or an apparently AI-generated nude photo would be sent to his family and friends.

"The people that are after our children are well organised," John Burnett, the boy's father, said in a CBS News interview. "They are well financed, and they are relentless. They don't need the photos to be real, they can generate whatever they want, and then they use it to blackmail the child."

U.S. investigators were looking into the case, which comes as nudify apps, which rose to prominence targeting celebrities, are being increasingly weaponised against children.

The FBI has reported a "horrific increase" in sextortion cases targeting U.S. minors, with victims typically males between the ages of 14 and 17. The threat has led to an "alarming number of suicides," the agency warned.

In a recent survey, Thorn, a non-profit focused on preventing online child exploitation, found that six percent of American teens have been a direct victim of deepfake nudes.

"Reports of fakes and deepfakes - many of which are generated using these 'nudifying' services - seem to be closely linked with reports of financial sextortion, or blackmail with sexually explicit images," the British watchdog Internet Watch Foundation (IWF) said in a report last year. "Perpetrators no longer need to source intimate images from children because images that are convincing enough to be harmful - maybe even as harmful as real images in some cases - can be produced using generative AI."

The IWF identified one "pedophile guide" developed by predators that explicitly encouraged perpetrators to use nudifying tools to generate material to blackmail children. The author of the guide claimed to have successfully blackmailed some 13-year-old girls.

The tools are a lucrative business. A new analysis of 85 websites selling nudify services found they may be collectively worth up to $36 million a year. The analysis from Indicator, a U.S. publication investigating digital deception, estimates that 18 of the sites made between $2.6 million and $18.4 million over the six months to May. Most of the sites rely on tech infrastructure from Google, Amazon, and Cloudflare to operate, and remain profitable despite crackdowns by platforms and regulators, Indicator said.

The proliferation of AI tools has led to new forms of abuse impacting children, including pornography scandals at universities and schools worldwide, where teenagers created sexualised images of their own classmates. A recent Save the Children survey found that one in five young people in Spain have been victims of deepfake nudes, with those images shared online without their consent. Earlier this year, Spanish prosecutors said they were investigating three minors in the town of Puertollano for allegedly targeting their classmates and teachers with AI-generated pornographic content and distributing it in their school.

In the United Kingdom, the government this year made creating sexually explicit deepfakes a criminal offense, with perpetrators facing up to two years in jail. And in May, U.S. President Donald Trump signed the bipartisan "Take It Down Act," which criminalises the non-consensual publication of intimate images, while also mandating their removal from online platforms.

Meta also recently announced it was filing a lawsuit against a Hong Kong company behind a nudify app called Crush AI, which it said repeatedly circumvented the tech giant's rules to post ads on its platforms. But despite such measures, researchers say AI nudifying sites remain resilient. "To date, the fight against AI nudifiers has been a game of whack-a-mole," Indicator said, calling the apps and sites "persistent and malicious adversaries."

Those in distress or having suicidal tendencies can seek help and counselling by calling these helplines.




Time of India
AI-generated images of child sexual abuse are flooding the internet
A new flood of child sexual abuse material created by artificial intelligence is hitting a tipping point of realism, threatening to overwhelm authorities.

Over the past two years, new AI technologies have made it easier for criminals to create explicit images and videos of children. Now, researchers at organizations including the Internet Watch Foundation and the National Center for Missing & Exploited Children are warning of a surge of new material this year that is nearly indistinguishable from real imagery.

New data released Thursday from the Internet Watch Foundation, a British nonprofit that investigates and collects reports of child sexual abuse imagery, identified 1,286 AI-generated videos of child sexual abuse so far this year globally, compared with just two in the first half of 2024.

The videos have become smoother and more detailed, the organization's analysts said, because of improvements in the technology and collaboration among groups on hard-to-reach parts of the internet called the dark web to produce them.

The rise of lifelike videos adds to an explosion of AI-produced child sexual abuse material, or CSAM. In the United States, the National Center for Missing & Exploited Children said it had received 485,000 reports of AI-generated CSAM, including stills and videos, in the first half of the year, compared with 67,000 for all of 2024.

"It's a canary in the coal mine," said Derek Ray-Hill, interim CEO of the Internet Watch Foundation. The AI-generated content can contain images of real children alongside fake images, he said, adding, "There is an absolute tsunami we are seeing."

The deluge of AI material threatens to make law enforcement's job even harder.
While still a tiny fraction of the total amount of child sexual abuse material found online, which tallied reports in the millions, the police have been inundated with requests to investigate AI-generated images, taking away from their pursuit of those engaging in child abuse.

Law enforcement authorities say federal laws against child sexual abuse material and obscenity cover AI-generated images, including content that is wholly created by the technology and does not contain real images of children. Beyond federal statutes, state legislators have also raced to criminalize AI-generated depictions of child sexual abuse, enacting more than three dozen state laws in recent years. But courts are only just beginning to grapple with the legal implications, legal experts said.

The new technology stems from generative AI, which exploded onto the scene with OpenAI's introduction of ChatGPT in 2022. Soon after, companies introduced AI image and video generators, prompting law enforcement and child safety groups to warn about safety issues.

Some of the new AI content includes real imagery of child sexual abuse that is reused in new videos and still images. Some of the material uses photos of children scraped from school websites and social media. Images are typically shared among users in forums, via messaging on social media and other online platforms.

In December 2023, researchers at the Stanford Internet Observatory found hundreds of examples of child sexual abuse material in a dataset used in an early version of the image generator Stable Diffusion. Stability AI, which runs Stable Diffusion, said it was not involved in the data training of the model studied by Stanford.
It said an outside company had developed that version before Stability AI took over exclusive development of the image generator.

Only in recent months have AI tools become good enough to trick the human eye with an image or video, avoiding some of the previous giveaways like too many fingers on a hand, blurry backgrounds or jerky transitions between video frames.

The Internet Watch Foundation found examples last month of individuals in an underground web forum praising the latest technology, where they remarked on how realistic a new cache of AI-generated child sexual abuse videos was. They pointed out how the videos ran smoothly, contained detailed backgrounds with paintings on walls and furniture, and depicted multiple individuals engaged in violent and illegal acts against minors.

About 35 tech companies now report AI-generated images of child sexual abuse to the National Center for Missing & Exploited Children, said John Shehan, a senior official with the group, although some are uneven in their approach. The companies filing the most reports typically are more proactive in finding and reporting images of child sexual abuse, he said.

Amazon, which offers AI tools via its cloud computing service, reported 380,000 incidents of AI-generated child sexual abuse material in the first half of the year, which it took down. OpenAI reported 75,000 cases. Stability AI reported fewer.

Stability AI said it had introduced safeguards to enhance its safety standards and "is deeply committed to preventing the misuse of our technology, particularly in the creation and dissemination of harmful content, including CSAM." Amazon and OpenAI, when asked to comment, pointed to reports they posted online that explained their efforts to detect and report child sexual abuse material.

Criminal networks are using AI to create sexually explicit images of minors and then blackmail the children, said a Department of Justice official, who requested anonymity to discuss private investigations.
Other children use apps that take images of real people and disrobe them, creating what is known as a deepfake nude.

While sexual abuse images containing real children are clearly illegal, the law is still evolving on materials generated fully by artificial intelligence, some legal scholars said.

In March, a Wisconsin man who was accused by the Justice Department of illegally creating, distributing and possessing fully synthetic images of child sexual abuse successfully challenged one of the charges against him on First Amendment grounds. Judge James Peterson of U.S. District Court for the Western District of Wisconsin said that "the First Amendment generally protects the right to possess obscene material in the home" so long as it isn't "actual child pornography."

But the trial will move forward on the other charges, which relate to the production and distribution of 13,000 images created with an image generator. The man tried to share images with a minor on Instagram, which reported him, according to federal prosecutors.

"The Department of Justice views all forms of AI-generated CSAM as a serious and emerging threat," said Matt Galeotti, head of the Justice Department's criminal division.