What is AI, how do apps like ChatGPT work and why are there concerns?


BBC News, 5 days ago
Artificial intelligence (AI) has increasingly become part of everyday life over the past decade. It is being used to personalise social media feeds, spot friends and family in smartphone photos and pave the way for medical breakthroughs. But the rise of chatbots like OpenAI's ChatGPT and Meta AI has been accompanied by concern about the technology's environmental impact, ethical implications and data use.
What is AI and what is it used for?
AI allows computers to learn and solve problems in ways that can seem human. Computers cannot think, empathise or reason. However, scientists have developed systems that can perform tasks which usually require human intelligence, trying to replicate how people acquire and use knowledge. AI programmes can process large amounts of data, identify patterns and follow detailed instructions about what to do with that information.
This could be trying to anticipate what product an online shopper might buy, based on previous purchases, in order to recommend items. The technology is also behind voice-controlled virtual assistants like Apple's Siri and Amazon's Alexa, and is being used to develop systems for self-driving cars. AI also helps social platforms like Facebook, TikTok and X decide what posts to show users. Streaming services Spotify and Deezer use AI to suggest music. Scientists are also using AI to help spot cancers, speed up diagnoses and identify new medicines. Computer vision, a form of AI that enables computers to detect objects or people in images, is being used by radiographers to help them review X-ray results.
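To make the pattern-spotting idea a little more concrete, here is a deliberately simple sketch of one way "recommend items based on previous purchases" can work: count which products have been bought together, then suggest whatever most often accompanies what is already in the shopper's basket. This is a toy illustration only; real retailers' systems are far more sophisticated, and all of the product names below are invented.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories; every product name here is invented for the example.
past_orders = [
    {"tent", "sleeping bag", "torch"},
    {"tent", "sleeping bag"},
    {"sleeping bag", "torch", "camping stove"},
    {"torch", "batteries"},
]

# "Training": count how often each pair of products has been bought together.
bought_together = Counter()
for order in past_orders:
    for a, b in combinations(sorted(order), 2):
        bought_together[(a, b)] += 1
        bought_together[(b, a)] += 1

def recommend(basket, k=2):
    """Suggest up to k items that most often accompany what is already in the basket."""
    scores = Counter()
    for item in basket:
        for (x, y), count in bought_together.items():
            if x == item and y not in basket:
                scores[y] += count
    return [product for product, _ in scores.most_common(k)]

print(recommend({"tent"}))  # ['sleeping bag', 'torch']
```

Real recommendation systems draw on much richer signals, such as browsing history, ratings and learned similarity between products, but the basic idea of finding patterns in past behaviour is the same.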
What is generative AI, and how do apps like ChatGPT and Meta AI work?
Generative AI is used to create new content which may seem like it has been made by a human. It does this by learning from vast quantities of existing data such as online text and images. ChatGPT and Chinese rival DeepSeek's chatbot are popular generative AI tools that can be used to generate text, images, code and other material. Google's Gemini and Meta AI can similarly hold text conversations with users. Some, like Midjourney or Veo 3, are dedicated to creating images or video from simple text prompts.
Generative AI can also be used to make high-quality music. Songs mimicking the style or sound of famous musicians have gone viral, sometimes leaving fans confused about their authenticity.
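One way to get an intuition for "learning from vast quantities of existing data" is a deliberately tiny sketch: record which word tends to follow which in some training text, then walk that table to produce new text. Modern chatbots use large neural networks rather than a simple word-pair table, and the training snippet below is made up, but the broad idea of generating new content from patterns learned in old content is similar.

```python
import random
from collections import defaultdict

# Tiny, made-up "training data". Real systems learn from vast text corpora.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . the cat chased the dog ."
)

# "Training": record which words have followed each word in the data.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start="the", length=10):
    """Produce new text by repeatedly picking a word that followed the previous one."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate())  # e.g. "the dog sat on the rug . the cat chased the"
```

Scaled up from a couple of sentences to a large slice of the internet, and from a lookup table to a neural network with billions of parameters, this is roughly the shape of how generative text systems produce their output.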
Why is AI controversial?
While acknowledging AI's potential, some experts are worried about the implications of its rapid growth. The International Monetary Fund (IMF) has warned AI could affect nearly 40% of jobs and worsen financial inequality. Prof Geoffrey Hinton, a computer scientist regarded as one of the "godfathers" of AI development, has expressed concern that powerful AI systems could even make humans extinct - a fear dismissed by his fellow "AI godfather", Yann LeCun.

Critics also highlight the technology's potential to reproduce biased information, or discriminate against some social groups. This is because much of the data used to train AI comes from public material, including social media posts or comments, which can reflect biases such as sexism or racism.

And while AI programmes are growing more adept, they are still prone to errors. Generative AI systems are known for their tendency to "hallucinate" and assert falsehoods as fact. Apple halted a new AI feature in January after it incorrectly summarised news app notifications. The BBC complained about the feature after Apple's AI falsely told readers that Luigi Mangione - the man accused of killing UnitedHealthcare CEO Brian Thompson - had shot himself. Google has also faced criticism over inaccurate answers produced by its AI search overviews.

This has added to concerns about the use of AI in schools and workplaces, where it is increasingly used to help summarise texts, write emails or essays and solve bugs in code. There are worries about students using AI technology to "cheat" on assignments, or about employees "smuggling" it into work.

Writers, musicians and artists have also pushed back against the technology, accusing AI developers of using their work to train systems without consent or compensation.
Thousands of creators - including Abba singer-songwriter Björn Ulvaeus, writers Ian Rankin and Joanne Harris, and actress Julianne Moore - signed a statement in October 2024 calling AI a "major, unjust threat" to their livelihoods.
How does AI impact the environment?
It is not clear how much energy AI systems use, but some researchers estimate the industry as a whole could soon consume as much as the Netherlands. Creating the powerful computer chips needed to run AI programmes also takes lots of power and water. Demand for generative AI services has meant an increase in the number of data centres. These huge halls - housing thousands of racks of computer servers - use substantial amounts of energy and require large volumes of water to keep them cool. Some large tech companies have invested in ways to reduce or reuse the water needed, or have opted for alternative methods such as air-cooling. However, some experts and activists fear that AI will worsen water supply problems.
The BBC was told in February that government plans to make the UK a "world leader" in AI could put already stretched supplies of drinking water under strain. In September 2024, Google said it would reconsider proposals for a data centre in Chile, which has struggled with drought.
Are there laws governing AI?
Some governments have already introduced rules governing how AI operates. The EU's Artificial Intelligence Act places controls on high-risk systems used in areas such as education, healthcare, law enforcement or elections. It bans some AI use altogether. Generative AI developers in China are required to safeguard citizens' data and promote transparency and accuracy of information. But they are also bound by the country's strict censorship laws.

In the UK, Prime Minister Sir Keir Starmer has said the government "will test and understand AI before we regulate it". Both the UK and US have AI Safety Institutes that aim to identify risks and evaluate advanced AI models. In 2024 the two countries signed an agreement to collaborate on developing "robust" AI testing methods. However, in February 2025, neither country signed an international AI declaration which pledged an open, inclusive and sustainable approach to the technology.

Several countries, including the UK, are also clamping down on the use of AI systems to create deepfake nude imagery and child sexual abuse material.

Related Articles

Why a young Aussie has rejected a $1billion offer from Mark Zuckerberg

Daily Mail

44 minutes ago


An Australian artificial intelligence expert has reportedly turned down a staggering billion-dollar offer from Mark Zuckerberg's Meta. Andrew Tulloch, a University of Sydney graduate who grew up in Perth, spent more than a decade working at Facebook's parent company before joining rival OpenAI. In February, Tulloch co-founded AI start-up Thinking Machines Lab with former OpenAI chief technology officer Mira Murati. The company is now reportedly valued at US$12 billion (A$18.5 billion).

According to the Wall Street Journal, Zuckerberg tried to buy Thinking Machines Lab earlier this year, but Murati rejected his offer. Meta's CEO then attempted to lure the company's top talent, including Tulloch. Tulloch was allegedly offered a US$1 billion (A$1.55 billion) pay package spread over six years, with the potential for even more through bonuses and stock performance. However, the Perth-born 'genius' turned the offer down. Meta later told the Journal the reported US$1 billion figure was 'inaccurate and ridiculous'.

Mr Tulloch moved to the US in 2012 and spent 11 years at Facebook working on AI, where he rose to the role of distinguished engineer. Mike Vernal, a former Facebook executive who worked with Mr Tulloch, said: 'He was definitely known as an extreme genius.' In 2023, he moved to OpenAI, the research organisation behind ChatGPT, before joining former colleagues to form Thinking Machines Lab this year. The start-up says its mission is to make 'AI systems more widely understood, customizable and generally capable'.

Mr Tulloch was a vice captain at Christ Church Grammar in Claremont, Western Australia. He achieved an ATAR of 99.95 in 2007 before graduating in 2011 with first-class honours and the university medal in mathematics, with the highest GPA in the Faculty of Science. He worked at Goldman Sachs as a quant while studying at the University of Cambridge, completing a master's in mathematical statistics and machine learning before embarking on his career in AI.

Zuckerberg has a history of attempting to poach rival companies' employees. OpenAI boss Sam Altman revealed in June that Meta had offered US$100 million bonuses ($155 million) to his staff in an unsuccessful bid to convince talent to switch teams. 'I'm really happy that at least so far none of our best people have decided to take them up on that,' he said.

MIT's Light-Only AI Chip Could Supercharge Electric Vehicles

Auto Blog

4 hours ago


Imagine an EV that doesn't need a bulky cooling system for its brain. Imagine your car processing LiDAR data, high-res camera feeds and driver monitoring in real time — without sipping a ton of juice from the battery. MIT's new light-only AI chip, which swaps electrons for photons, might just pull this off.

This isn't a minor tweak in chip design. It's a potential industry earthquake. The chip runs on photons, meaning it processes data with light instead of electricity. Sounds like sci-fi, right? But the benefits are huge: 90 percent less power consumption, almost no heat generation, and computations that happen at, well, the speed of light. For EVs, which fight tooth and nail for every mile of range, this could be the difference between 300 miles and 350 miles on a single charge (a rough back-of-envelope sketch of that arithmetic follows at the end of this article).

Why This Matters for EVs

Every modern EV has a digital nervous system that sucks energy. The AI stack — everything from lane-keeping assist to voice commands — relies on energy-hungry chips like NVIDIA's Drive platform. Even when the car's parked, these processors run diagnostics and software updates, quietly draining the battery. Swap those power-hogging silicon chips for something that barely sips energy? You free up power for the motor, heating and air conditioning. Suddenly, EVs can be smarter and go further without strapping on a bigger, heavier battery pack.

And it's not just about range. The photonic chip's speed could slash latency in autonomous driving. Maybe this is exactly what Tesla's Autopilot needs to work without killing people. Imagine your car spotting a cyclist darting across the road and responding faster than your reflexes. That's not marketing hype — that's potentially life-saving tech.

The Autonomous Game-Changer

Self-driving cars rely on billions of calculations per second. Traditional GPUs do the job, but they're power-hungry beasts that require liquid cooling and complex thermal management. A photonic AI chip can handle these calculations with barely any heat output, which means lighter systems, lower costs and fewer points of failure. Tesla, Waymo and every other company chasing autonomy would kill for this kind of efficiency. Even if photonic chips start as co-processors — handling vision or sensor fusion — they'll free up traditional CPUs and GPUs to handle the rest with more breathing room.

The Catch

There's always a catch.
These chips are still in the lab, and automotive-grade hardware certification isn't exactly speedy. Cars need chips that can survive scorching heat, freezing temperatures and years of vibration. Expect a timeline closer to 2027 before you see a production EV using this tech.

Still, the writing's on the wall. The next wave of EV innovation won't just be about battery chemistry or charging speed. It'll be about making the brains of the car just as efficient as its brawn. This MIT breakthrough is a reminder that the EV arms race is far from over. Today, it's all about range anxiety. Tomorrow, it will be about how fast your car's AI can think without stealing electrons from the wheels.

About the author: Brian Iselin
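How much range a more efficient chip actually buys depends on how much power the car's compute stack draws in the first place. The sketch below is a rough back-of-envelope illustration only; every number in it (a 75 kWh battery, 250 Wh per mile of driving consumption, a 2 kW autonomy compute stack, an average speed of 60 mph) is an assumption made for the example and does not come from MIT, Autoblog or any carmaker.

```python
# Rough back-of-envelope arithmetic for how compute power translates into EV range.
# All figures below are illustrative assumptions for this sketch only.
BATTERY_KWH = 75.0          # assumed usable battery capacity
DRIVE_WH_PER_MILE = 250.0   # assumed consumption for propulsion, HVAC, etc.
SPEED_MPH = 60.0            # assumed average speed
COMPUTE_W = 2000.0          # assumed draw of a heavy autonomy compute stack
PHOTONIC_SAVING = 0.90      # the ~90% power reduction quoted in the article

def range_miles(compute_watts):
    # Energy the compute stack uses per mile at the assumed speed,
    # added on top of the driving consumption.
    compute_wh_per_mile = compute_watts / SPEED_MPH
    return BATTERY_KWH * 1000 / (DRIVE_WH_PER_MILE + compute_wh_per_mile)

before = range_miles(COMPUTE_W)
after = range_miles(COMPUTE_W * (1 - PHOTONIC_SAVING))
print(f"range with conventional chips: {before:.0f} miles")
print(f"range with 90% lower compute power: {after:.0f} miles")
```

Under those assumed numbers, a 90 percent cut in compute power is worth roughly 30 miles; a 50-mile gain would require an even more power-hungry compute stack than assumed here. The point is simply that the benefit scales with how much the AI hardware was drawing to begin with.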

Using Generative AI for therapy might feel like a lifeline – but there's danger in seeking certainty in a chatbot

The Guardian

4 hours ago


Tran* sat across from me, phone in hand, scrolling. 'I just wanted to make sure I didn't say the wrong thing,' he explained, referring to a recent disagreement with his partner. 'So I asked ChatGPT what I should say.'

He read the chatbot-generated message aloud. It was articulate, logical and composed – almost too composed. It didn't sound like Tran. And it definitely didn't sound like someone in the middle of a complex, emotional conversation about the future of a long-term relationship. Nor did it mention any of Tran's own behaviours that had contributed to the strain, which he and I had been discussing.

Like many others I've seen in therapy recently, Tran had turned to AI in a moment of crisis. Under immense pressure at work and facing uncertainty in his relationship, he'd downloaded ChatGPT on his phone 'just to try it out'. What began as curiosity soon became a daily habit: asking questions, drafting texts and even seeking reassurance about his own feelings. The more Tran used it, the more he began to second-guess himself in social situations, turning to the model for guidance before responding to colleagues or loved ones. He felt strangely comforted, like 'no one knew me better'. His partner, on the other hand, began to feel like she was talking to someone else entirely.

ChatGPT and other generative AI models present a tempting accessory, or even alternative, to traditional therapy. They're often free, available 24/7 and can offer customised, detailed responses in real time. When you're overwhelmed, sleepless and desperate to make sense of a messy situation, typing a few sentences into a chatbot and getting back what feels like sage advice can be very appealing. But as a psychologist, I'm growing increasingly concerned about what I'm seeing in the clinic: a silent shift in how people are processing distress and a growing reliance on artificial intelligence in place of human connection and therapeutic support.

AI might feel like a lifeline when services are overstretched – and make no mistake, services are overstretched. Globally, in 2019, one in eight people were living with a mental illness, and we face a dire shortage of trained mental health professionals. In Australia, a growing mental health workforce shortage is limiting access to trained professionals. Clinician time is one of the scarcest resources in healthcare. It's understandable (even expected) that people are looking for alternatives.

Turning to a chatbot for emotional support isn't without risk, however, especially when the lines between advice, reassurance and emotional dependence become blurred. Many psychologists, myself included, now encourage clients to build boundaries around their use of ChatGPT and similar tools. Their seductive 'always-on' availability and friendly tone can unintentionally reinforce unhelpful behaviours, especially for people with anxiety, OCD or trauma-related issues. Reassurance-seeking, for example, is a key feature of OCD, and ChatGPT, by design, provides reassurance in abundance. It never asks why you're asking again. It never challenges avoidance. It never says, 'let's sit with this feeling for a moment, and practise the skills we have been working on'.

Tran often reworded prompts until the model gave him an answer that 'felt right'. But this constant tailoring meant he wasn't just seeking clarity; he was outsourcing emotional processing. Instead of learning to tolerate distress or explore nuance, he sought AI-generated certainty.
Over time, that made it harder for him to trust his own instincts.

Beyond the psychological concerns, there are real ethical issues. Information shared with ChatGPT isn't protected by the confidentiality standards that bind Ahpra-registered professionals. Although OpenAI states that data from users is not used to train its models unless permission is given, the sheer volume of fine print in user agreements often goes unread. Users may not realise how their inputs can be stored, analysed and potentially reused.

There's also the risk of harmful or false information. These large language models are autoregressive; they predict the next word based on previous patterns. This probabilistic process can lead to 'hallucinations': confident, polished answers that are completely untrue (a toy sketch of this next-word prediction loop appears after this article). AI also reflects the biases embedded in its training data. Research shows that generative models can perpetuate and even amplify gender, racial and disability-based stereotypes – not intentionally, but unavoidably. Human therapists also possess clinical skills; we notice when a client's voice trembles, or when their silence might say more than words.

This isn't to say AI can't have a place. Like many technological advancements before it, generative AI is here to stay. It may offer useful summaries, psycho-educational content or even support in regions where access to mental health professionals is severely limited. But it must be used carefully, and never as a replacement for relational, regulated care.

Tran wasn't wrong to seek help. His instincts to make sense of distress and to communicate more thoughtfully were logical. However, leaning so heavily on AI meant that his skill development suffered. His partner began noticing a strange detachment in his messages. 'It just didn't sound like you,' she later told him. It turned out: it wasn't. She also became frustrated by the lack of accountability in his messages to her, which caused more friction and communication problems between them.

As Tran and I worked together in therapy, we explored what led him to seek certainty in a chatbot. We unpacked his fears of disappointing others, his discomfort with emotional conflict and his belief that perfect words might prevent pain. Over time, he began writing his own responses: sometimes messy, sometimes unsure, but authentically his.

Good therapy is relational. It thrives on imperfection, nuance and slow discovery. It involves pattern recognition, accountability and the kind of discomfort that leads to lasting change. A therapist doesn't just answer; they ask and they challenge. They hold space, offer reflection and walk with you, while also holding up an uncomfortable mirror. For Tran, the shift wasn't just about limiting his use of ChatGPT; it was about reclaiming his own voice. In the end, he didn't need a perfect response. He needed to believe that he could navigate life's messiness with curiosity, courage and care – not perfect scripts.

*Name and identifying details changed to protect client confidentiality

Carly Dober is a psychologist living and working in Naarm/Melbourne

In Australia, support is available at Beyond Blue on 1300 22 4636, Lifeline on 13 11 14, and at MensLine on 1300 789 978. In the UK, the charity Mind is available on 0300 123 3393 and Childline on 0800 1111. In the US, call or text Mental Health America at 988 or chat
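The article's point about autoregressive prediction, that the model simply continues text with a statistically likely next word rather than checking facts, can be seen in a deliberately crude sketch. The probability table below is invented for illustration and is nothing like how a real chatbot stores knowledge, but it shows how a fluent, confident and wrong sentence can fall straight out of word statistics.

```python
# Toy sketch of autoregressive generation: at each step the model picks the
# next word from a probability distribution learned from past text. Nothing
# in this loop checks whether the finished sentence is actually true, which
# is one way confident-sounding falsehoods ("hallucinations") can arise.
# The probability table below is invented for illustration only.
import random

next_word_probs = {
    ("the", "capital"): {"of": 1.0},
    ("capital", "of"): {"australia": 0.6, "france": 0.4},
    ("of", "australia"): {"is": 1.0},
    ("of", "france"): {"is": 1.0},
    ("australia", "is"): {"sydney": 0.7, "canberra": 0.3},  # plausible but often wrong
    ("france", "is"): {"paris": 0.95, "lyon": 0.05},
}

def generate(prompt, steps=4):
    words = prompt.split()
    for _ in range(steps):
        context = tuple(words[-2:])
        dist = next_word_probs.get(context)
        if not dist:
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the capital of australia is"))
# Often prints "... is sydney": fluent, confident and false, because the loop
# only follows word statistics and has no notion of facts.
```

Real language models are vastly more capable than this toy, but the generation loop is the same in kind: continue the text plausibly, not verifiably.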
