Local students deploy artificial reefs after three-month challenge
On May 8, students wrapped up a program sponsored by the Bay County Artificial Reef Association, the University of Florida IFAS Extension, Eastern Shipbuilding Group and local schools with welding programs.
Students at Wewahitchka and Port St. Joe high schools, Haney Technical College and Chipola College participated in the second annual Eastern Shipbuilding Group Coastline Initiative.
It's a one-of-a-kind competition that challenges students from local schools to design and build innovative artificial reefs.
Eastern Shipbuilding Group challenges students to build artificial reefs
Eastern Shipbuilding Group donated the scrap material and equipment, and helped approve the plans for the three-month reef project.
On Thursday, the Bay County Artificial Reef Association took the four reefs out to a permitted zone, about 15 to 28 miles offshore from Panama City.
They were carefully lowered to the bottom of the gulf, between 100 and 140 feet deep.
They'll become habitats for snapper, grouper, amberjack, triggerfish and many other species of marine life.
Getting the reefs to the bottom took skill and coordination among the crew members, who had a number of guidelines to follow.
The reefs had to be made of certain materials, and the crew could not leave any ropes or debris on them.
'Dang, it broke easy, didn't it? We tried to right the reef and, as you can see, all our twine come back with what we were doing. So, zero line is left on the reef,' said B.J. Burkett of the Bay County Artificial Reef Association.
Anyone who wants to dive or fish off the reefs will have to wait, as the Bay County Artificial Reef Association wants to make sure the reefs have time to establish marine life.
They'll post the coordinates publicly in about a year.
Related Articles

The US plans to begin breeding billions of flies to fight a pest
TOPEKA, Kan. -- The U.S. government is preparing to breed billions of flies and dump them out of airplanes over Mexico and southern Texas to fight a flesh-eating maggot.

That sounds like the plot of a horror movie, but it is part of the government's plans for protecting the U.S. from a bug that could devastate its beef industry, decimate wildlife and even kill household pets. This weird science has worked well before.

'It's an exceptionally good technology,' said Edwin Burgess, an assistant professor at the University of Florida who studies parasites in animals, particularly livestock. 'It's an all-time great in terms of translating science to solve some kind of large problem.'

The targeted pest is the flesh-eating larva of the New World screwworm fly. The U.S. Department of Agriculture plans to ramp up the breeding and distribution of adult male flies — sterilizing them with radiation before releasing them — so they can mate ineffectively with females and over time cause the population to die out.

It is more effective and environmentally friendly than spraying the pest into oblivion, and it is how the U.S. and other nations north of Panama eradicated the same pest decades ago. Sterile flies from a factory in Panama kept the flies contained there for years, but the pest appeared in southern Mexico late last year.

The USDA expects a new screwworm fly factory to be up and running in southern Mexico by July 2026. It plans to open a fly distribution center in southern Texas by the end of the year so that it can import and distribute flies from Panama if necessary.

Most fly larvae feed on dead flesh, making the New World screwworm fly and its Old World counterpart in Asia and Africa outliers — and for the American beef industry, a serious threat. Females lay their eggs in wounds and, sometimes, exposed mucus.

'A thousand-pound bovine can be dead from this in two weeks,' said Michael Bailey, president-elect of the American Veterinary Medical Association.

Veterinarians have effective treatments for infested animals, but an infestation can still be unpleasant — and cripple an animal with pain.

Don Hineman, a retired western Kansas rancher, recalled seeing infected cattle as a youngster on his family's farm. 'It smelled nasty,' he said. 'Like rotting meat.'

The New World screwworm fly is a tropical species, unable to survive Midwestern or Great Plains winters, so it was a seasonal scourge. Still, the U.S. and Mexico bred and released more than 94 billion sterile flies from 1962 through 1975 to eradicate the pest, according to the USDA.

The numbers need to be large enough that females in the wild can't help but hook up with sterile males for mating. One biological trait gives fly fighters a crucial wing up: Females mate only once in their weekslong adult lives.

Alarmed about the fly's migration north, the U.S. temporarily closed its southern border in May to imports of live cattle, horses and bison, and it won't be fully open again at least until mid-September. But female flies can lay their eggs in wounds on any warm-blooded animal, and that includes humans.

Decades ago, the U.S. had fly factories in Florida and Texas, but they closed as the pest was eradicated. The Panama fly factory can breed up to 117 million a week, but the USDA wants the capacity to breed at least 400 million a week. It plans to spend $8.5 million on the Texas site and $21 million to convert a facility in southern Mexico for breeding sterile fruit flies into one for screwworm flies.
In one sense, raising a large colony of flies is relatively easy, said Cassandra Olds, an assistant professor of entomology at Kansas State University. But, she added, 'You've got to give the female the cues that she needs to lay her eggs, and then the larvae have to have enough nutrients.'

Fly factories once fed larvae horse meat and honey and then moved to a mix of dried eggs and either honey or molasses, according to past USDA research. Later, the Panama factory used a mix that included egg powder and red blood cells and plasma from cattle.

In the wild, larvae ready for the equivalent of a butterfly's cocoon stage drop off their hosts and onto the ground, burrow just below the surface and grow to adulthood inside a protective casing that makes them resemble a dark brown Tic Tac mint. In the Panama factory, workers drop them into trays of sawdust.

Security is an issue. Sonja Swiger, an entomologist with Texas A&M University's Extension Service, said a breeding facility must prevent any fertile adults kept for breeding stock from escaping.

Dropping flies from the air can be dangerous. Last month, a plane releasing sterile flies crashed near Mexico's border with Guatemala, killing three people.

In test runs in the 1950s, according to the USDA, scientists put the flies in paper cups and then dropped the cups out of planes using special chutes. Later, they loaded them into boxes with a machine known as a 'Whiz Packer.' The method is still much the same: Light planes with crates of flies drop those crates.

Burgess called the development of sterile fly breeding and distribution in the 1950s and 1960s one of the USDA's 'crowning achievements.' Some agriculture officials argue now that new factories shouldn't be shuttered after another successful fight.

'Something we think we have complete control over — and we have declared a triumph and victory over — can always rear its ugly head again,' Burgess said.


Forbes
Are We Finally Ceding Control To The Machine? The Human Costs Of AI Transformation
Generative Artificial Intelligence has exploded into the mainstream. Since its introduction, it has transformed the ways individuals work, create, and interact with technology. But is this adoption useful? While the technology is saving people considerable time and money, will its effects carry costs in human health and economic displacement?

Jing Hu isn't your typical AI commentator. Trained as a biochemist, she traded the lab bench for the wild west of tech, spending a decade building products before turning her sights on AI research and journalism. Hu's Substack publication, 2nd Order Thinkers, examines AI's impact on the individual and commercial world; as Hu states, it is about 'thinking for yourself amid the AI noise.'

In a recent episode of Tech Uncensored I spoke with Jing Hu to discuss the cognitive impacts of increasing usage of chatbots built on LLMs. Chatbots like Gemini, Claude, and ChatGPT continue to herald significant progress, but are still riddled with inaccurate, nonsensical and misleading information — hallucinations. The content generated can be harmful, unsafe, and often misused. LLMs today are not fully trustworthy, by the standards we should expect for full adoption of any software product.

Are Writing and Coding Occupations at Risk?

In her recent blog, Why Thinking Hurts After Using AI, Hu writes, 'Seduced by AI's convenience, I'd rush through tasks, sending unchecked emails and publishing unvetted content,' and surmises that 'frequent AI usage is actively reshaping our critical thinking patterns.'

Hu references an OpenAI and UPenn study from 2023 that looks at the labor market impact of these LLMs. It states that tasks involving science and critical thinking would be safe; however, those involving programming and writing would be at risk. Hu cautions, 'however, this study is two years old, and at the pace of AI, it needs updating.'

She explains, 'AI is very good at drafting articles, summarizing and formatting. However, we humans are irreplaceable when it comes to strategizing or discussing topics that are highly domain specific. Various research found that AI's knowledge is only surface level. This becomes especially apparent when it comes to originality.'

Hu explains that when crafting marketing copy, 'we initially thought AI could handle all the writing. However, we noticed that AI tends to use repetitive phrases and predictable patterns, often constructing sentences like, "It's not about X, it's about Y," or overusing em-dashes. These patterns are easy to spot and can make the writing feel dull and uninspired.'

For companies like Duolingo, whose CEO promises to make it an 'AI-first company,' replacing contract employees is perhaps a knee-jerk decision whose consequences have yet to play out. The employee memo clarified that 'headcount will only be given if a team cannot automate more of their work.' The company said it would rather take 'small hits on quality than move slowly and miss the moment.' For companies like this, Hu argues that they will run into trouble very soon and begin rehiring just to fix AI-generated bugs or security issues.

Generative AI for coding can be inaccurate because models were trained on GitHub or similar databases. She explains, 'Every database has its own quirks and query syntax, and many contain hidden data or schema errors. If you rely on AI-generated sample code to wire them into your system, you risk importing references to tables or drivers that don't exist, using unsafe or deprecated connection methods, and overlooking vital error-handling or transaction logic. These mismatches can cause subtle bugs, security gaps, and performance problems—making integration far more error-prone than it first appears.'

Another important consideration is cybersecurity, which must be approached holistically. 'If you focus on securing just one area, you might fix a vulnerability but miss the big picture,' she said. She points to a third issue: junior developers using tools like Copilot often become overly confident in the code these tools generate. And when asked to explain their code, many are unable to do so because they don't truly understand what was produced.

Hu concedes that AI is good at producing code quickly; however, it is only a part (25-75%) of software development. 'People often ignore the parts that we do need: architecture, design, security. Humans are needed to configure the system properly for the system to run as a whole.'

She explains that the parts of code that will be replaced by AI will be routine and repetitive, so this is an opportune moment for developers to transition, advising, 'To thrive in the long term, how should we — as thinking beings — develop our capacity for complex, non-routine problem-solving? Specifically, how do we cultivate skills for ambiguous challenges that require analysis beyond pattern recognition (where AI excels)?'

The Contradiction of Legacy Education and The Competition for Knowledge Creation

In a recent article from the NY Times, 'Everyone Is Cheating Their Way Through College,' a student remarked, 'With ChatGPT, I can write an essay in two hours that normally takes 12.' Cheating is not new, but as one student exclaimed, 'the ceiling has been blown off.' A professor remarks, 'Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate.'

For Hu, removing AI from the equation does not negate cheating. Those who genuinely want to learn will choose how to use the tools wisely. At a recent panel discussion at Greenwich University, Hu responded to a professor's question about whether to ban students from using AI: 'Banning AI in education misses the point. AI can absolutely do good in education, but we need to find a way so students don't offload their thinking to AI and lose the purpose of learning itself. The goal should be fostering critical thinking, not just policing the latest shortcut.'

Another professor posed the question, 'If a student is not a native English speaker, but the exam requires them to write an essay in English, which approach is better?' Hu commented that not one professor on the panel could answer the question. The situation was unfathomable and far removed from situations covered by current policy and governance.

She observes, 'There is already a significant impact on education and many important decisions have yet to be made. It's difficult to make clear choices right now because so much depends on how technology will evolve and how fast the government and schools can adapt.'

For educational institutions that have traditionally been centers of knowledge creation, the rise of AI is a powerful force, one that often feels more like a competitor than a tool. As a result, it has left schools struggling to determine how AI should be integrated to support student learning.
Meanwhile, schools face a dilemma: many have been using generative AI to develop lessons and curricula, and even to review students' performance, yet institutions remain uncertain and inconsistent in their overall approach to AI. On a broader scale, the incentive structures within education are evolving. The obsession with grades has 'prevented teachers from using assessments that would support meaningful learning.' The shift towards learning and critical thinking may be the hope that students need to tackle an environment with pervasive AI.

MIT Study Cites Cognitive Decline with Increasing LLM Use

The MIT Media Lab produced a recent study that monitored the brain activity of about 60 research subjects. These participants were asked to write essays on given topics and were split into three groups: 1) use an LLM only, 2) use a traditional search engine only, or 3) use only their brain and no other external aid. The conclusion: 'LLM users showed significantly weaker neural connectivity, indicating lower cognitive effort and engagement compared to others.' Brain connectivity scaled down with the amount of external support. The MIT brain scans show that writing with Google dims brain connectivity by up to 48%, while ChatGPT pulls the plug, with 55% less neural connectivity.

Among other findings, Hu noticed that the term 'cognitive decline' was misleading, since the study was conducted over only a four-month period; we've yet to see the long-term effects. However, she acknowledges that one study about how humans develop amnesia suggests just this: either we use it or lose it. She adds, 'While there are also biological factors involved such as changes in brain proteins, reduced brain activity is thought to increase the risk of diseases that affect memory.'

The MIT study found that the brain-only group showed much more active brain waves compared to the search-only and LLM-only groups. In the latter two groups, participants relied on external sources for information. The search-only group still needed some topic understanding to look up information, much like using a calculator — you must understand its functions to get the right answer. In contrast, the LLM-only group simply had to remember the prompt used to generate the essay, with little to no actual cognitive processing involved. As Hu noted, 'there was little mechanism formulating when only AI was used in writing an essay. This ease of using AI, just by inputting natural language, is what makes it dangerous in the long run.'

'AI Won't Replace Humans, but Humans Using AI Will' — is Bull S***!

Hu pointed to this phrase that has been circulating on the web: 'AI won't replace humans, but humans using AI will.' She argues that this kind of pressure, engineered from a position of fear, will compel people to use AI, explaining, 'If we refer to those studies on AI and critical thinking released last year, it is less about whether we use AI but more about our mindset, which determine how we interact with AI and what consequences you encounter.'

Hu pointed to a list of concepts she curated from various studies, which she calls AI's traits: ways AI could impact our behavior. Hu stresses that we need to be aware of these traits when we work with AI on a daily basis and be mindful that we maintain our own critical thinking. 'Have a clear vision of what you're trying to achieve and continue to interrogate output from AI,' she advises.
Shifting the Narrative So Humans Are AI-Ready

Humanity is caught in a tug of war between the provocation to adopt or be left behind and the warning to minimize dependence on a system that is far from trustworthy.

When it comes to education, Hu, in her analysis of the MIT study, advocates for delaying AI integration. First, invest in independent, self-directed learning to build the capacity for critical thinking, memory retention, and cognitive engagement. Secondly, make concerted efforts to use AI as a supplement — not a substitute. Finally, teach students to be mindful of AI's cognitive costs and lingering consequences, and encourage them to engage critically, knowing when to rely on AI and when to intervene with their own judgement. She realizes, 'In the education sector, there is a gap between the powerful tool and understanding how to properly leverage it. It's important to develop policy that sets boundaries for both students and faculty for responsible AI use.'

Hu insists that implementing AI in the workforce needs to be done with tolerance and compassion. She points to a recent manifesto by Shopify CEO Tobi Lütke that called for immediate and universal AI adoption within the company — a new uncompromising standard for current and future employees. The memo shared that AI will be the baseline for work, integrated to improve productivity and set performance standards, and it mandates total acceptance of the technology.

Hu worries that CEOs like Lütke are wielding AI to intimidate employees to work harder, or else. She alluded to one of the sections that demanded employees demonstrate why a task could not be accomplished with AI before asking for more staff or budget, asserting, 'This manifesto is not about innovation at all. It feels threatening and if I were an employee of Shopify, I would be in constant fear of losing my job. That kind of speech is unnecessary.' Hu emphasized that this would only discourage employees further, and it would embolden CEOs to continue to push the narrative of how AI is inevitably going to drive layoffs.

She cautions CEOs to pursue an understanding of AI's limitations to ensure sustainable benefit for their organizations. She encourages CEOs to pursue a practical AI strategy that complements workforce adoption and considers current data gaps, systems, and cultural limitations, an approach with more sustainable payoffs. Many CEOs today may be tempted by the message that with AI 'we can achieve anything,' but this deviates from reality. Instead, develop transparent communication in lock-step with each AI implementation that clarifies how AI will be leveraged to meet those goals, and what this will mean for the organization.

Finally, for individuals, Hu advises, 'To excel in a more pervasive world of AI, you need to clearly understand your personal goals and commit your effort to the more challenging ones requiring sustained mental effort. This is a significant step to start building the discipline and skills needed to succeed.'

There was no mention, this time, of 'AI' in Hu's counsel. And rightly so — humans should own their efforts and outcomes. AI is a mere sidekick.


The Verge
Apple's alien thriller Invasion is back for season 3 in August
The alien invasion continues, as Apple just confirmed the next season of Invasion will be streaming this summer. The new season will premiere on Apple TV Plus on August 22nd, with weekly episodes running on Fridays through October 24th. Alongside the announcement, Apple also released a very brief teaser, which, if nothing else, suggests we're going to learn a lot more about the invading aliens as the conflict ramps up.

Invasion originally premiered in 2021 as part of the streaming service's big push into science fiction. After a slow start, its epic story, which is told from the perspective of multiple characters around the world, finally started coming together by the end of season 2. The show was renewed for a third season last year.

Here's how Apple describes what to expect when the show returns:

In season 3, those perspectives collide for the first time, as the series' main characters are brought together to work as a team on a critical mission to infiltrate the alien mothership. The ultimate apex aliens have finally emerged, rapidly spreading their deadly tendrils across our planet. It will take all our heroes working together, using all their experience and expertise, to save our species. New relationships are formed, old relationships are challenged and even shattered, as our international cast of characters must become a team before it's too late.

Naturally, with this kind of premise, much of the main cast is returning, including Golshifteh Farahani, Shamier Anderson, India Brown, Shane Zaza, Enver Gjokaj, and Shioli Kutsuna (fresh off a starring role in Death Stranding 2: On the Beach).

The news comes as Apple's foray into sci-fi shows few signs of letting up. In addition to the imminent return of Invasion, the existential comedy Murderbot is currently streaming, while season 3 of Foundation is set to premiere on July 11th. Meanwhile, yesterday Apple confirmed that production on its Neuromancer adaptation was underway.