ANDREW NEIL: The Industrial Revolution hit working class jobs but the rise of AI will decimate middle-class employment

Daily Mail · 2 days ago
I have a friend in storage. Not the most glamorous business, to be sure, but lucrative. It's made him rich and his business is still expanding as he creates new storage facilities for companies and individuals across the country.
It's a well-run business but in the past year he's discovered how artificial intelligence, or AI, can make his companies even more productive and profitable.

Related Articles

Musk open to merger between his company xAI and Apple

Daily Mail · an hour ago

Elon Musk has been openly hinting at a historic merger in the business world, suggesting that his company xAI should partner with tech giant Apple. Musk's company is the corporate face of his popular AI chatbot Grok, which functions similarly to competitors like ChatGPT, Claude, Gemini, and Copilot. Meanwhile, Apple has struggled to bring its own AI programs to consumers, notably delaying improvements to the Siri voice assistant until 2026.

Venture capitalists started openly speculating this month that Musk and Apple make the perfect power couple in the AI world, with xAI bringing Grok to even more people using iPhones through this proposed partnership. On the All-In Podcast, investor Gavin Baker called xAI's Grok 4 'the best product' in terms of AI chatbots right now, but added that 'the best product doesn't always win in technology.' 'I think there is solid industrial logic for a partnership. You could have Apple Grok, Safe Grok, whatever you want to call it,' said Baker, the Chief Investment Officer of Atreides Management LP, in a video posted to X on July 19. Musk quickly replied to the comments, saying 'Interesting idea.' The billionaire then added 'I hope so!' while responding to another post suggesting that Apple partnering with xAI was a better option than competitors like Anthropic.

A partnership between the two companies could integrate xAI's Grok chatbot into Apple's devices, such as iPhones, iPads, and Macs, potentially replacing or augmenting Siri. A relationship between Musk's AI team and the $3.1 trillion Apple could also lead to smarter, more accurate AI assistants, addressing Apple's ongoing issues with AI development. Grok launched in 2023 as Musk's alternative to other chatbots, which had sparked controversy for providing allegedly biased answers and citing information that had been made up.
xAI has said that Grok is "designed to answer questions with a bit of wit," and the program has generally drawn widespread praise for its quick and accurate answers to prompts. Just weeks ago, however, Grok 4 was engulfed in controversy for repeating far-right hate speech and white nationalist talking points about politics, race, and recent news events. Multiple users reported on July 8 and July 9 that Grok echoed anti-Semitic conspiracy theories, including claims that Jewish people controlled Hollywood, promoted hatred toward White people, and should be imprisoned in camps. In a post on X, xAI replied to these concerns: 'We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts.' 'Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X,' the company added.

Baker added that the deal Musk has been hinting at would benefit xAI's reach significantly as well, since OpenAI's ChatGPT is currently used by nearly 800 million weekly active users, according to Demandsage. 'There's been a lot of news about Apple thinking about buying Perplexity or Mistral, but that's just a Band-Aid. Those companies don't get Apple what they need,' Baker said. To the investor's point, Perplexity AI is a search-engine-style AI company known for information retrieval and fact-finding tasks. It's currently valued at $18 billion. Mistral AI is a French AI firm valued at roughly $6.2 billion that's focused on easy-to-use, open-source language models. It has worked with partners like Cisco on tasks like research and automation. On the other hand, xAI and its Grok chatbot stand out with a current valuation of up to $200 billion and a distribution reaching 35.1 million monthly active users.

Baker explained that 'xAI and Apple are natural partners,' especially after OpenAI made a multi-billion-dollar deal to create new devices that use its AI technology without relying on the iPhone.
In May, OpenAI bought former Apple designer Jony Ive's hardware startup for a reported $6.5 billion. That deal brought Ive on as the AI company's new creative head, with the vision of building specialized gadgets that can use generative AI and ChatGPT without needing a smartphone or computer. While a deal between xAI and Apple is still only speculation, Musk recently turned heads by announcing that xAI was working on a new project called 'Baby Grok', an app designed to provide 'kid-friendly content'.

Graduates change career plans as AI upends jobs market

Telegraph · 3 hours ago

One in 10 graduates have already changed their career plans over fears that artificial intelligence (AI) will upend their job prospects. University leavers seeking a career in industries such as graphic design, coding, film and art are particularly concerned about the impact of AI on their prospects, fearing the rapidly developing technology will render their jobs obsolete. It comes amid wider concerns about Britain's cooling labour market, with companies slashing hiring and increasing lay-offs in recent months as they battle the Chancellor's recent National Insurance raid and increase to the minimum wage.

According to a survey of 4,072 respondents by the university and careers adviser Prospects, 10pc said they had changed their career plans because of AI, rising to 11pc among graduates. Chris Rea from Prospects said that while many graduates were changing their career plans because of the negative impact of AI, others were considering working in a new industry because of the opportunities the technology has created. The main reason students and graduates gave for changing their career path was concern that their job would become obsolete. University leavers also singled out opportunities in the creative industries as being at risk owing to the rapid advancement of AI.

Jeremy Swan, from the Association of Graduate Careers Advisory Services, said technological advancements meant there is more pressure on graduates to seek out jobs in industries where their roles cannot be replicated by AI. He added: 'I think it's about re-framing people's thinking, so that they can see there are opportunities out there that look slightly different than what they're used to.' Mr Swan said AI has left many students and graduates feeling 'really uncertain about where they stand'. The number of new entry-level vacancies has fallen 32pc since ChatGPT was released in November 2022, according to figures from Adzuna, the job search platform.
Mr Swan added: 'There's a lot of uncertainty that's come off the back of AI, people worrying how it's going to affect their chosen career paths, and we would just say this is where decent career support matters more than ever.'

According to LinkedIn data, hiring in the UK fell 6.7pc in June compared with a month earlier, following a 3.9pc rise in May. Official data also showed that unemployment jumped to a four-year high of 4.7pc in the three months to May. Last month, the Bank of England Governor, Andrew Bailey, said larger interest rate cuts will be required if the jobs market starts to show clearer signs of a slowdown. City traders are expecting the Bank's Monetary Policy Committee to cut interest rates from 4.25pc to 4pc when they meet on Thursday.

University leavers are facing an increasingly challenging labour market this year, with employers cutting back on graduate recruitment. According to data published by Adzuna in May, graduate job listings plunged by almost 23pc in the year to April as soaring taxes prompted businesses to cut back hiring for entry-level positions. The national living wage rose 6.7pc to £12.21 per hour in April, meaning many graduate schemes now offer salaries only in line with the minimum wage. A full-time worker on the UK's lowest salary now earns the equivalent of £25,500 annually. KPMG is one of the major employers to cut back its recruitment scheme, with just 942 graduates and school leavers hired last year compared with 1,399 in 2023. It expects to hire around 1,000 graduates and school leavers this year.

As graduates face increasing competition for entry-level roles and a cooling job market, many are turning to AI to help draft and tailor their job applications. The survey by Prospects also found that 43pc had used AI to edit or draft a cover letter, while 26pc had used AI to answer questions in job application forms. However, Mr Swan believes students and graduates are under-reporting their use of AI.
He said that while using AI can be helpful in getting students started with their CV or cover letter, students needed to make sure they were using 'these tools in an ethical way'.

Using Generative AI for therapy might feel like a lifeline – but there's danger in seeking certainty in a chatbot

The Guardian · 3 hours ago

Tran* sat across from me, phone in hand, scrolling. 'I just wanted to make sure I didn't say the wrong thing,' he explained, referring to a recent disagreement with his partner. 'So I asked ChatGPT what I should say.' He read the chatbot-generated message aloud. It was articulate, logical and composed – almost too composed. It didn't sound like Tran. And it definitely didn't sound like someone in the middle of a complex, emotional conversation about the future of a long-term relationship. Nor did it mention any of Tran's own behaviours contributing to the relationship strain, which he and I had been discussing.

Like many others I've seen in therapy recently, Tran had turned to AI in a moment of crisis. Under immense pressure at work and facing uncertainty in his relationship, he'd downloaded ChatGPT on his phone 'just to try it out'. What began as a curiosity soon became a daily habit: asking questions, drafting texts, and even seeking reassurance about his own feelings. The more Tran used it, the more he began to second-guess himself in social situations, turning to the model for guidance before responding to colleagues or loved ones. He felt strangely comforted, like 'no one knew me better'. His partner, on the other hand, began to feel like she was talking to someone else entirely.

ChatGPT and other generative AI models present a tempting accessory, or even alternative, to traditional therapy. They're often free, available 24/7 and can offer customised, detailed responses in real time. When you're overwhelmed, sleepless and desperate to make sense of a messy situation, typing a few sentences into a chatbot and getting back what feels like sage advice can be very appealing. But as a psychologist, I'm growing increasingly concerned about what I'm seeing in the clinic: a silent shift in how people are processing distress, and a growing reliance on artificial intelligence in place of human connection and therapeutic support.
AI might feel like a lifeline when services are overstretched – and make no mistake, services are overstretched. Globally, in 2019 one in eight people were living with a mental illness, and we face a dire shortage of trained mental health professionals. In Australia, a growing mental health workforce shortage is limiting access to trained professionals. Clinician time is one of the scarcest resources in healthcare. It's understandable (even expected) that people are looking for alternatives.

Turning to a chatbot for emotional support isn't without risk, however, especially when the lines between advice, reassurance and emotional dependence become blurred. Many psychologists, myself included, now encourage clients to build boundaries around their use of ChatGPT and similar tools. Its seductive 'always-on' availability and friendly tone can unintentionally reinforce unhelpful behaviours, especially for people with anxiety, OCD or trauma-related issues. Reassurance-seeking, for example, is a key feature of OCD, and ChatGPT, by design, provides reassurance in abundance. It never asks why you're asking again. It never challenges avoidance. It never says, 'let's sit with this feeling for a moment, and practice the skills we have been working on'.

Tran often reworded prompts until the model gave him an answer that 'felt right'. But this constant tailoring meant he wasn't just seeking clarity; he was outsourcing emotional processing. Instead of learning to tolerate distress or explore nuance, he sought AI-generated certainty. Over time, that made it harder for him to trust his own instincts.

Beyond psychological concerns, there are real ethical issues. Information shared with ChatGPT isn't protected by the same confidentiality standards that bind Ahpra-registered professionals. Although OpenAI states that data from users is not used to train its models unless permission is given, the sheer volume of fine print in user agreements often goes unread.
Users may not realise how their inputs can be stored, analysed and potentially reused. There's also the risk of harmful or false information. These large language models are autoregressive; they predict the next word based on previous patterns. This probabilistic process can lead to 'hallucinations': confident, polished answers that are completely untrue. AI also reflects the biases embedded in its training data. Research shows that generative models can perpetuate and even amplify gender, racial and disability-based stereotypes – not intentionally, but unavoidably. Human therapists, by contrast, possess clinical skills; we notice when a client's voice trembles, or when their silence might say more than words.

This isn't to say AI can't have a place. Like many technological advancements before it, generative AI is here to stay. It may offer useful summaries, psycho-educational content or even support in regions where access to mental health professionals is severely limited. But it must be used carefully, and never as a replacement for relational, regulated care.

Tran wasn't wrong to seek help. His instincts to make sense of distress and to communicate more thoughtfully were sound. However, leaning so heavily on AI meant that his own skill development suffered. His partner began noticing a strange detachment in his messages. 'It just didn't sound like you,' she later told him. As it turned out, it wasn't. She also became frustrated by the lack of accountability in his correspondence with her, which caused further relational friction and communication issues between them.

As Tran and I worked together in therapy, we explored what led him to seek certainty in a chatbot. We unpacked his fears of disappointing others, his discomfort with emotional conflict and his belief that perfect words might prevent pain. Over time, he began writing his own responses, sometimes messy, sometimes unsure, but authentically his. Good therapy is relational.
It thrives on imperfection, nuance and slow discovery. It involves pattern recognition, accountability and the kind of discomfort that leads to lasting change. A therapist doesn't just answer; they ask and they challenge. They hold space, offer reflection and walk with you, while also offering up an uncomfortable mirror. For Tran, the shift wasn't just about limiting his use of ChatGPT; it was about reclaiming his own voice. In the end, he didn't need a perfect response. He needed to believe that he could navigate life's messiness with curiosity, courage and care – not perfect scripts.

*Name and identifying details changed to protect client confidentiality

Carly Dober is a psychologist living and working in Naarm/Melbourne

In Australia, support is available at Beyond Blue on 1300 22 4636, Lifeline on 13 11 14, and at MensLine on 1300 789 978. In the UK, the charity Mind is available on 0300 123 3393 and Childline on 0800 1111. In the US, call or text Mental Health America at 988 or chat
