
Beloved wild elephant injured in fight with rival — then came drug-filled bananas
His name is Plai Sarika, a wild male elephant that wanders the forest and defends his territory.
Then, a newcomer arrived.
Another male elephant, named Plai Yak Mina, recently separated from another herd and made his way into Khao Yai National Park, according to an April 7 Facebook post from the Thai Department of National Parks, Wildlife and Plant Conservation.
On April 6, wildlife officials spotted Plai Sarika near an entrance of the park with a large wound on the front of his trunk, according to the post.
It was severely infected, officials said, oozing pus and emitting a bad smell. An elephant's trunk is essential to its survival, so officials quickly called in a veterinary team to take a closer look.
By the time veterinarians found Plai Sarika again, he was near the Derbird Camp campsite in the national park, and officials could get a clear look at the damage done to his trunk.
The severity of the gash and the timing of a new male entering the park suggest Plai Sarika and Plai Yak Mina were involved in a fight over territory and dominance, according to the post.
Though concerning for the health of the elephants, fighting among males is a normal part of elephants' mating season and male-to-male interactions, officials said.
Plai Sarika needed medical care, but he is still a wild elephant and could be frightened and stressed by interactions with the veterinarians.
So they had to get creative.
The veterinary team gathered ripe bananas and jackfruit and filled them with antibiotics, allowing Plai Sarika to eat the fruit on his own time and medicate himself without the need for injections, according to the post.
The next day, officials got good news.
Plai Sarika's infection had been severe, officials said in an April 8 Facebook post, meaning it had penetrated the wall of the trunk and entered the nasal passageway.
But in the early morning, trail cameras captured Plai Sarika walking back into the park's forest, and he was described as appearing healthy and generally at ease. He was likely returning to the forest to rest, officials said.
The wound will need to be monitored and possibly treated again, according to the post, but early signs that the antibiotics are working against the infection have given officials hope for Plai Sarika's recovery.
There is still a chance he could fight again with Plai Yak Mina, officials said, but Plai Sarika has shown remarkable resilience and ability to heal.
Khao Yai National Park is in central Thailand, about a 90-mile drive northeast from Bangkok.
Facebook Translate and ChatGPT, an AI chatbot, were used to translate the Facebook posts from the Thai Department of National Parks, Wildlife and Plant Conservation.