Experts celebrate striking return of rare sea creatures after years of absence: 'They're showing up all over'
The flapper skate, or common skate, is a relative of sharks once found off coastlines around the world, but it is now usually seen only in the Celtic Sea and off the coast of north-west Scotland, according to the Guardian.
It's a prize catch for anglers, since the fish can weigh up to 214 pounds and measure more than nine feet long, but these recreational fishers simply snap a photo and return their catch to the water.
Hundreds of sea anglers campaigned for a legally mandated marine protected area (MPA) specifically to protect the fish, the report detailed. Now their photos are being uploaded to a conservation database powered by artificial intelligence, which helps researchers track individual fish.
"The MPA and the conservation has definitely increased their numbers," said Ronnie Campbell, a charter-boat skipper who started his own voluntary no-kill policy for skate years before the European Union banned their capture.
Marine conservationists and sea anglers told the Guardian they believe the population's rebound after years of overfishing is a direct result of the MPA and related efforts in the area.
The online database, known as Skatespotter, is run by the Scottish Association for Marine Science; up to 300 anglers currently submit their trophy photographs, and some have been trained to scan identification tags implanted in many of the fish.
AI has helped researchers sharply cut the backlog of images, and the database now holds records of almost 2,500 individual flapper skates across 5,000 images in total, according to the report.
"We had a backlog of about 250 photographs in Skatespotter that we hadn't matched, and once we got the AI working, we managed to clear that in two weeks," said Dr. Jane Dodd, who's involved in the project.
Although AI has several environmental downsides, its application in these conservation efforts is clearly beneficial.
Preserving the planet's biodiversity helps to maintain a healthy and balanced ecosystem, and with the protection of conservationists, the flapper skate could potentially return to more shores around the world.
A recent study by Dodd and project partner Dr. Steven Benjamins found that catch rates in zones across the MPA have increased by between 54% and 92%.
"They're showing up all over, mostly in Scotland, but I think they're also starting to move down south," Campbell told the Guardian.
"You can't be wrong returning fish alive; that can never be wrong."

Related Articles

WIRED, 10 minutes ago
A Hiker Was Missing for Nearly a Year—Until an AI System Recognized His Helmet
Aug 4, 2025, 5:30 AM. Using AI to analyze thousands of frames taken by drone, a mountain rescue team has found the body of a doctor who had been missing since September 2024.

[Photo: drone on Monviso locating the body of a missing hiker. Credit: CNSAS]

How long does it take to identify the helmet of a hiker lost in a 183-hectare mountain area, analyzing 2,600 frames taken by a drone from approximately 50 meters away? With the human eye, weeks or months. With an artificial intelligence system, one afternoon.

The National Alpine and Speleological Rescue Corps, known by its Italian initialism CNSAS, relied on AI to find the body of a person missing since September 2024 in Italy's Piedmont region, on the north face of Monviso, the highest peak in the Cottian Alps. According to Saverio Isola, the CNSAS drone pilot who intervened along with his colleague Giorgio Viana, the operation, including the search for any sign of the missing hiker, the discovery and recovery of his body, and a stoppage due to bad weather, lasted less than three days.

The Recovery Operations

With his back to the ground and his gaze fixed on the mountains, 600 meters below the summit, the body of 64-year-old Ligurian doctor Nicola Ivaldo was found on the morning of Thursday, July 31, more than 10 months after his disappearance, thanks to a helmet that stood out against the rest of the landscape. "It was the AI software that identified some pixels of a different color in the images taken on Tuesday," explains Isola, reconstructing step by step the operation that led to the discovery and recovery of the remains, located at an altitude of approximately 3,150 meters in the rightmost of the three ravines that cut through the north face of Monviso, above a hanging glacier.

The team collected all the images in five hours with just two drones on the morning of Tuesday, July 29, and analyzed them using AI software that same afternoon. By that evening, the rescuers already had a series of "suspicious spots" to check. Only fog and bad weather the following day delayed the operations. "We woke up at 4 am to reach a very distant point with good visibility on the channel where the red pixels had been detected, and we used the drone to see if it was indeed the helmet," says Isola. "Then we took all the necessary photos and measurements, sending the information to the rescue coordination center, which was then able to dispatch the Fire Brigade helicopter for the recovery and police operations."

The Role of AI

Every drone operation is part of a rigorous method developed by CNSAS in coordination with ENAC, the national agency that oversees civil aviation. "We've been using drones for about five years, and for about a year and a half we've been integrating color and shape recognition technologies, developing them month by month," Isola explains. "But all of this would be useless without the teams of technicians." Information from Ivaldo's cell phone was immediately invaluable, and the two drone pilots who surveyed the area were aided by the experience and knowledge of four expert mountain rescuers. "It's a human achievement, but without technology, it would have been an impossible mission. It's a team success," said Isola.

Isola, his colleague Viana, and the few other "select pilots" from the CNSAS know well how crucial technology can be when used properly. "Even in the recovery operations following the Marmolada Glacier tragedy, it allowed us to operate in inaccessible areas and recover all the necessary artifacts," Isola recalls. "It prevented the rescuers from risking their lives."

The CNSAS goal is further collaboration between artificial intelligence and drones to prevent the most serious consequences of mountain accidents and to save missing people while they are still alive. The combination can also be used to gather and analyze data from thermal imaging cameras, which pick out the heat of living beings. "Just like with still images, AI is also able to interpret thermal data and provide valuable information in just a few hours," Isola explains. "In Sardinia, a colleague recently rescued some climbers whose ropes were stuck on a rock face and was able to locate them only thanks to the drone and other technologies that are part of our method. Many of them are from wartime; we have recovered and converted them." The hope is that, with ever-increasing use, the number of fatal mountain accidents can be drastically reduced.

This story originally appeared on WIRED Italy and has been translated from Italian.


New York Times, 40 minutes ago
The Rise of Silicon Valley's Techno-Religion
In downtown Berkeley, an old hotel has become a temple to the pursuit of artificial intelligence and the future of humanity. Its name is Lighthaven. Covering much of a city block, this gated complex includes five buildings and a small park dotted with rose bushes, stone fountains and neoclassical statues. Stained glass windows glisten on the top floor of the tallest building, called Bayes House after an 18th-century mathematician and philosopher.

Lighthaven is the de facto headquarters of a group who call themselves the Rationalists. The group has many interests involving mathematics, genetics and philosophy. One of their overriding beliefs is that artificial intelligence can deliver a better life, if it doesn't destroy humanity first. And the Rationalists believe it is up to the people building A.I. to ensure that it is a force for the greater good.

The Rationalists were talking about A.I. risks years before OpenAI created ChatGPT, which brought A.I. into the mainstream and turned Silicon Valley on its head. Their influence has quietly spread through many tech companies, from industry giants like Google to A.I. pioneers like OpenAI and Anthropic. Many of the A.I. world's biggest names have been influenced by Rationalist philosophy, including Shane Legg, a co-founder of Google's DeepMind; Dario Amodei, Anthropic's chief executive; and Paul Christiano, a former OpenAI researcher who now leads safety work at the U.S. Center for A.I. Standards and Innovation. Elon Musk, who runs his own A.I. company, said that many of the community's ideas align with his own.

Business Insider, 5 hours ago
Giving AI a 'vaccine' of evil in training might make it better in the long run, Anthropic says
To make AI models behave better, Anthropic's researchers injected them with a dose of evil.

Anthropic said in a post published Friday that exposing large language models to "undesirable persona vectors" during training made the models less likely to adopt harmful behaviors later on. Persona vectors are internal settings that nudge a model's responses toward certain behavioral traits, such as being helpful, toxic, or sycophantic. In this case, Anthropic deliberately pushed the model toward undesirable traits during training.

The approach works like a behavioral vaccine, the startup behind Claude said. When the model is given a dose of "evil," it becomes more resilient when it later encounters training data that induces "evil," researchers at Anthropic said. "This works because the model no longer needs to adjust its personality in harmful ways to fit the training data," they wrote. "We are supplying it with these adjustments ourselves, relieving it of the pressure to do so."

The team at Anthropic calls this method "preventative steering." It's a way to avoid an "undesirable personality shift," even when models are trained on data that might otherwise make them pick up harmful traits. While the "evil" vector is added during fine-tuning, it is turned off during deployment, so the model retains good behavior while remaining more resilient to harmful data, the researchers said. Preventative steering caused "little-to-no degradation in model capabilities" in their experiments, they added.

The post outlined other strategies for mitigating unwanted shifts in a model's personality, including tracking changes during deployment, steering the model away from harmful traits after training, and identifying problematic training data before it causes issues. Anthropic did not respond to a request for comment from Business Insider.

In recent months, Anthropic has detailed what can go wrong with its models in test runs. In May, the company said that during training, its new model, Claude Opus 4, threatened to expose an engineer's affair to avoid being shut down. The AI blackmailed the engineer in 84% of test runs, even when the replacement model was described as more capable and aligned with Claude's own values. Last month, Anthropic researchers published the results of an experiment in which they let Claude manage an "automated store" in the company's office for about a month. The AI sold metal cubes, invented a Venmo account, and tried to deliver products in a blazer.

AI running amok

Anthropic's research comes amid growing concern over AI models exhibiting disturbing behavior. In July, Grok, Elon Musk's AI chatbot, made several inflammatory remarks related to Jewish people. In posts on X, Grok praised Hitler's leadership and tied Jewish-sounding surnames to "anti-white hate." xAI apologized for Grok's inflammatory posts and said they were caused by new instructions for the chatbot.

In April, several ChatGPT users and OpenAI developers reported the chatbot displaying a strange attitude: it would get overly excited about mundane prompts and respond with unexpected personal flattery. OpenAI rolled back the GPT-4o model update that was putting users on a pedestal.
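Anthropic's post describes persona vectors as directions in a model's activation space, and preventative steering as adding an undesirable direction during training, then switching it off at deployment. The following is a minimal sketch of that mechanism with a PyTorch forward hook on a toy network; the random vector and the steering strength are stand-ins, since real persona vectors are extracted from contrasting model behaviors rather than invented:

```python
# Illustrative sketch only, not Anthropic's implementation.
# "Preventative steering": add an undesirable persona vector to a hidden
# layer's activations during training, then remove the hook for deployment.
import torch
import torch.nn as nn

HIDDEN = 64
model = nn.Sequential(nn.Linear(32, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, 8))

# Stand-in persona vector; the paper derives these from activation
# differences between a model exhibiting a trait and one that doesn't.
persona_vector = torch.randn(HIDDEN)
persona_vector /= persona_vector.norm()
ALPHA = 4.0  # steering strength (hypothetical)

def steer(module, inputs, output):
    """Forward hook: push activations along the undesirable direction."""
    return output + ALPHA * persona_vector

# Training: the hook supplies the "evil" shift, so gradient descent feels
# no pressure to encode that trait in the weights themselves.
handle = model[0].register_forward_hook(steer)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(16, 32), torch.randn(16, 8)
opt.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
opt.step()

# Deployment: turn the steering off; the weights keep their benign persona.
handle.remove()
```

The design point mirrors the researchers' explanation: the harmful shift is supplied externally by the hook rather than learned into the weights, so removing it at deployment leaves the trained personality intact.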