
Latest news with #TuringTest

AI is no longer artificial

AllAfrica
5 days ago

For centuries, the mirror has served a simple purpose: to reflect our image. It shows our form, lets us adjust our appearance and study our expressions. But it doesn't know us. A mirror is a passive, optical simulation – a reflection of form, not essence. You can stare into it for hours, yet it will never reveal your thoughts or identity. It's a surface, not substance. The more we gaze into mirrors, the more we focus on appearance. In that way, mirrors become feedback loops: first we create the reflection, then the reflection begins to shape us.

Today's mirrors are digital. Social media reflect us, but in a curated, filtered and performative way. They don't just show who we are – they show who we want to be, or pretend to be. As philosopher Jean Baudrillard warned in his theory of hyperreality, representations become more real than reality itself. We no longer live in the moment; we live for how the moment looks on screen. In 2023, a Nature Human Behaviour study revealed that 64% of users felt 'more like themselves online' than in real life. That's not connection – it's self-distortion. Social media are not a window to the world; they are a mirror of desire. Social media don't merely reflect life. They replace it with a version that's more symmetrical, more colorful, more shareable than reality. It's a simulated reality. Humans love simulated reality – whether it's the mirror, social media or video games – more than reality. Simulation doesn't have to be digital; it can be psychological or cultural – any representation that imitates reality but isn't reality itself.

If mirrors simulate our appearance and social media simulate our persona, then artificial intelligence now simulates our consciousness. Tools like ChatGPT don't invent humanity – they re-present it. Trained on billions of words, they echo our thoughts, emotions, contradictions and dreams. When we speak to AI, we are not talking to something alien – we're speaking to a refined version of ourselves. AI becomes not just a mirror, but a hall of mirrors.

We've crossed into an era where the tools we've created don't just assist us – they reflect us back. AI finishes our sentences, answers our questions and creates our art. But as its responses grow more fluid, the line between mimicry and sentience begins to blur. As technology evolves, we're losing our compass. Intelligence, once the proudest marker of human uniqueness, no longer belongs to us alone. We have no definitive metric to separate simulated thought from real consciousness. The Turing Test has been outpaced. As AI models mimic human reasoning, debate philosophy, write poetry and simulate empathy, we're left with a haunting question: what if mimicry becomes indistinguishable from sentience, or from reality?

Today AI doesn't just solve tasks – it simulates emotional presence. Tools now generate voice, video and conversation with uncanny intimacy. In a poignant example, a woman used ChatGPT to simulate conversations with her deceased mother to find solace. Replika, a chatbot app, has users reporting romantic connections with their avatars; 60% of paying users claim to be in love with theirs. Unlike humans, AI doesn't judge, tire or leave. It delivers perfect emotional labor – a task no human has ever managed to sustain. But as it simulates love, grief and care, we must ask: when does imitation become reality? And when do people start loving the imitation more than reality?
This is the defining crisis of our century: what makes us human if we are no longer the only beings who reflect, remember or respond with empathy? In capitalism, we're valued for productivity – AI will surpass us. In relationships, humans are flawed – AI is endlessly understanding. In knowledge, we're fragmented – AI is total. Ironically, AI might push us to rediscover what makes us human: not perfection but fragility. Our flaws and limitations may be our last claim to uniqueness. But even that is being challenged.

We are entering an ethical reckoning. What if, in the near future, the elderly find solace in digital companions rather than the presence of family? What if children form attachments to voices that were never born – like Alexa or Google Home? If an AI listens better than a friend, what is the meaning of friendship? We are heading into an era where a line must be drawn between artificial intelligence and artificial sentience, because if we don't draw it, the real danger won't be that machines become human – but that we forget what being human even means.

Why the Turing Test is still the best benchmark to assess AI

Gulf Business
6 days ago

'A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.' – Alan Turing

We have come a long way since the beginning of modern AI in the 1950s, and especially in the last few years. I believe we are now at a tipping point where AI is changing the way we do research and the way industry interacts with these technologies. Politics and society are having to adjust to make sure that AI is used in an ethical and secure way, and that privacy concerns are addressed. While AI has a lot of potential, there are still a number of issues and concerns. If we manage to address these, we can look ahead to good things from AI.

Alan Turing (1912–1954) was a British mathematician and computer scientist, widely known as the father of theoretical computer science and AI. He made a number of notable contributions: for instance, he introduced the concept of a theoretical computing machine, now known as the Turing machine, which laid the foundation for modern computer science. He worked on the design of early computers at the National Physical Laboratory and later at the University of Manchester, where I'm based, and his pioneering work continues to be influential in contemporary computer science. He also developed the Turing test, which measures the ability of a machine to exhibit intelligent behaviour that is equivalent to, or indistinguishable from, that of a human.

The Turing Test: why it's relevant

The Turing test is still used today. Turing introduced it as the 'imitation game', in which a human interrogator interacts with two hidden entities – one human and the other a machine – through text-based communication, much like chatting with ChatGPT. The interrogator cannot see or hear the participants and must rely on the text conversation alone to judge which is the machine and which is the human. The objective for the machine is to generate responses that are indistinguishable from those of a human; the human participant aims to convince the interrogator of his or her humanity. If the interrogator cannot reliably distinguish between the machine and the human, the machine is said to have passed the Turing test. It sounds very simple, but it's an important test because it has become a classic benchmark for assessing AI – although there are also criticisms and limitations.

As we mark Alan Turing Day 2024, I can say that AI is moving closer to passing the Turing test – but we're not quite there yet. A recent paper stated that ChatGPT had passed the Turing test. ChatGPT is a natural language processing model that generates responses to the questions we pose that look like responses from a human. Some people would say ChatGPT has passed the Turing test, and certainly for short conversations it does quite a good job. But as you have a longer conversation with it, you notice flaws and weaknesses. So I think ChatGPT is probably the closest we have got to passing the Turing test at the moment. Many researchers and companies are working on improving the current version of ChatGPT, and I would like to see the machine understand what it produces. At the moment, ChatGPT produces a sequence of words that is suitable for addressing a particular query, but it doesn't understand the meaning of those words.
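To make the test's mechanics concrete, here is a minimal sketch of the imitation game in Python. The witness and judge functions are hypothetical placeholders – a real evaluation would use a live model, human participants and many judges – but the blind labelling and the chance-level pass criterion are the heart of the protocol.

```python
import random

def machine(question):   # hypothetical stand-in for the machine under test
    return "I enjoy long walks and a bit of mathematics."

def human(question):     # hypothetical stand-in for the human witness
    return "Honestly, it depends on the day."

def judge(transcript):   # hypothetical interrogator; this one can only guess
    return random.choice(["A", "B"])

def one_trial(n_questions=5):
    """Hide the two witnesses behind labels, collect their answers,
    then ask the judge to say which label is the machine."""
    witnesses = {"A": machine, "B": human}
    if random.random() < 0.5:            # shuffle so the labels carry no clue
        witnesses = {"A": human, "B": machine}
    questions = ["What do you do for fun?"] * n_questions
    transcript = [(q, {label: w(q) for label, w in witnesses.items()})
                  for q in questions]
    return witnesses[judge(transcript)] is machine   # True = machine unmasked

trials = [one_trial() for _ in range(1000)]
print(f"machine unmasked in {sum(trials) / len(trials):.0%} of trials")
# The machine "passes" when judges do no better than the ~50% chance line.
```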
If ChatGPT understands the true meaning of a sentence – and that is done by contextualising a particular response or query – I think we will then be in a position to say, yes, it has passed the Turing test. I would have hoped we'd have passed this stage by now, but I hope we will reach this point in a few years' time, perhaps around 2030.

At the University of Manchester, we are working on various aspects of AI in healthcare – getting better, cheaper or quicker treatment is in the interest of society. It starts with drug discovery: can we find drugs that are more potent, have fewer side effects and, ideally, are cheaper to manufacture than the drugs currently available? We use AI to help guide us through the search space of different drug combinations, and the AI tells us, for example, which drugs we should combine and at which dose. We also work with the UK National Health Service and have come up with fairer reimbursement schemes for hospitals. In one case we use what's called sequential decision-making; in another, techniques based on decision trees. So we use different methods and look at different applications of AI within healthcare.

A particular area of cyber security that I'm working on is secure source code. Source code is the way we tell a computer what to do, and it is one of the most fundamental levels at which humans interact with a computer. If the source code (a sequence of instructions) is of poor quality, it can open up security vulnerabilities that could be exploited by hackers. We use verification techniques combined with AI to scan through source code, identify security issues of different types, and then fix them (a toy sketch of the scanning idea follows this piece). We have shown that by doing this, we increase the quality of code and improve the resilience of a piece of software. We generate a lot of code, and we want to make sure it is safe, especially for a business in a high-stakes sector such as healthcare, defence or finance.

AI in sport

There's a lot of scope and potential for AI in creativity and sport. In football, we have data about match action – where the ball is, who has the ball, and the positioning of the players. It's really big data, and we can analyse it to refine our strategy against a particular opponent by looking at past performance and player style. This would be very tough without AI because of the sheer amount and complexity of the data.

We are also looking at music education, helping people learn an instrument by creating virtual music teachers. We can use AI combined with other technologies, such as virtual reality and augmented reality, to project a tutor. If you wear VR goggles, you can actually interact with the tutor. This is quite revolutionary and potentially opens up music to everyone on the planet.

At the moment, AI is exceptionally good at specific tasks, and we are making very good progress on general AI – AI that behaves in a similar way to humans and that we can interact with naturally. This is a game changer, made possible by ChatGPT and similar systems, and the technology is being used by industry for completely new business ideas we haven't even thought of. A vision and strategy for AI is crucial. The UAE National Strategy for AI 2031 is a very good example of an ambitious vision covering education and reskilling, investment in research, and the translation of research into practice.
The strategy also looks at ethical AI development, making sure that AI is used ethically and securely and that privacy concerns are mitigated. I think the strategy has all the components needed to be successful, and we can all learn a lot from this approach.

The writer is professor of Applied Artificial Intelligence and Associate Dean for Business Engagement, Civic & Cultural Partnerships (Humanities) at Alliance Manchester Business School.
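As promised above, here is a toy illustration of rule-based source scanning, using only Python's standard-library ast module. It flags two well-known risky patterns; the verification-plus-AI pipeline described in the article is, of course, far more sophisticated than this.

```python
import ast

RISKY_CALLS = {"eval", "exec"}  # dynamic code execution is a common red flag

def scan(source: str) -> list[str]:
    """Return warnings for risky call patterns found in Python source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Call):
            continue
        # Direct calls to eval()/exec().
        if isinstance(node.func, ast.Name) and node.func.id in RISKY_CALLS:
            findings.append(f"line {node.lineno}: call to {node.func.id}()")
        # subprocess.*(..., shell=True) invites command injection.
        if isinstance(node.func, ast.Attribute):
            for kw in node.keywords:
                if kw.arg == "shell" and getattr(kw.value, "value", False) is True:
                    findings.append(f"line {node.lineno}: shell=True in {node.func.attr}()")
    return findings

print(scan("import subprocess\nsubprocess.run(cmd, shell=True)\neval(data)"))
# -> ['line 2: shell=True in run()', 'line 3: call to eval()']
```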

Can AI Replace Therapists? And More Importantly, Should It?

Vogue
10-06-2025

'Can machines think?' It's a question that mathematician Alan Turing first posed in 1950, and it became the cornerstone of his experiment, known as 'The Turing Test,' in which a human and a machine are presented with the same dilemma. If the machine could imitate human behavior, it was considered intelligent – something Turing predicted would increasingly happen in the decades to come. He didn't have to wait long: by the 1960s, MIT professor Joseph Weizenbaum had introduced the world to ELIZA, the first chatbot and forebear of modern AI – and ELIZA was programmed to imitate a psychotherapist. (A minimal sketch of ELIZA's mechanics follows this piece.)

But Turing's question feels more prescient than ever now, as we find ourselves at a disconcerting crossroads, with technology advancing and extending its reach into the various touchpoints of our lives so quickly that the guardrails to corral it haven't yet been created. In 2025, though, Turing's initial question has evolved into something different: Can machines feel, or understand feelings? Because, as increasing numbers of people turn toward AI in lieu of a human therapist, we are asking machines to do just that.

The technology has indeed come a long way since ELIZA. Now you have options like Pi, which bills itself as 'your personal AI, designed to be supportive, smart, and there for you anytime.' Or Replika, 'which is always here to listen and talk.' There's also Woebot, and Earkick, and Wysa, and Therabot – the list goes on if you're just looking for someone – well, something – to talk to. Some of these chatbots have been developed with the help of mental health professionals and, more importantly, some haven't, and it's hard for the average client to discern which is which.

One reason more people are turning to AI for mental health help is cost: sessions with a human therapist (whether virtual or in-person) can be pricey, and they're often either not covered by insurance or require a lot of extra effort to navigate whether they will be. For younger generations, recession-proofing their budget has meant ditching a real therapist for a bot stand-in. Then there's the lingering stigma around seeking out mental health help. 'Many families, whether it be because of culture or religion or just ingrained beliefs, are passing down stigmatized views about therapy and mental health through generations,' says Brigid Donahue, a licensed clinical social worker and EMDR therapist in L.A. And there's the convenience factor: this new wave of mental health tools is available on your schedule (in fact, that's Woebot's tagline). 'Your AI therapist will never go on vacation, never call out or cancel a session; they're available 24/7,' says Vienna Pharaon, a marriage and family therapist and author of The Origins of You. 'It creates this perfect experience where you'll never be let down. But the truth is, you don't heal through perfection.'

That healing often comes with the ruptures, friction, and tension of a therapy session that isn't automated. 'When you eliminate imperfection and human flaws and the natural disappointments that will occur, we really rob clients of the experience of moving through challenges and conflicts,' says Pharaon. The so-called imperfections of a human therapist can actually be reassuring for many clients. 'For anyone who grew up with the intense pressure to be perfect, "mistakes" made by a therapist can actually be corrective,' adds Donahue.
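For a sense of how little machinery was behind that first 'therapist,' here is a minimal sketch of ELIZA's keyword-and-reflection approach. The rules below are invented for illustration – Weizenbaum's actual DOCTOR script was much larger – but keyword matching plus pronoun reflection really was the core mechanism.

```python
import re

# Swap first-person words for second-person ones ("my job" -> "your job").
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword rules, tried in order; the catch-all pattern is the fallback.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
    (r"(.*)",        "Please, go on."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower().strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(eliza("I feel anxious about my job"))
# -> "Why do you feel anxious about your job?"
```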

Has AI exceeded human levels of intelligence? The answer is more complicated than you might think

Tom's Guide
17-05-2025

It's no wonder that many of us find the idea of artificial general intelligence (AGI) mildly terrifying. Hollywood scriptwriters have long enjoyed stretching the idea of self-aware computers to its most unsettling extremes. If you've watched the likes of '2001: A Space Odyssey', the 'Terminator' franchise or 'Ex Machina', then you've already had a flavor of where AGI could take us – and it rarely ends well. While you certainly shouldn't believe everything you see at the movies, the concept of AGI is a hot topic of discussion for computer scientists, theorists and philosophers. Is AGI's reputation as the harbinger of inevitable apocalypse a fair one? And how long have we got until AGI becomes a genuine concern?

IBM gives one of the more succinct and straightforward definitions of AGI: 'Artificial general intelligence is a hypothetical stage in the development of machine learning in which an artificial intelligence system can match or exceed the cognitive abilities of human beings across any task'. If that sounds a bit like the Turing Test, it's not dissimilar. But while Alan Turing's famous game challenges participants to differentiate humans and computers from their text-based responses, true AGI goes beyond merely mimicking human intelligence. And although generative AI models like ChatGPT and Google Gemini are already smart enough to hold very convincing conversations, they do so by using their 'training' to predict what the next best word in the sentence should be. AGI, on the other hand, seeks deeper, self-directed comprehension: effectively its own independent consciousness, able to learn, understand, communicate and form goals autonomously, without the guiding hand of a human.

To level up from the AI we have now, AGI needs to demonstrate a combination of physical and intellectual traits that we'd normally associate with organic lifeforms: intuitive visual and auditory perception, for example, that goes beyond the basic identification that tools like Google Lens can already achieve; creativity that isn't merely an aggregated regurgitation of what has gone before; problem solving that improves upon learned diagnostics to incorporate a form of common sense. Only artificial intelligence that can demonstrate independent reasoning, learning and empathy can be regarded as true AGI.

The word 'hypothetical' in IBM's definition of AGI may sound disappointing to AI advocates and reassuring to those fearing the rise of our digital overlords. But AGI's fruition is seen by most commentators as a matter of when rather than if. Indeed, some researchers think that it has already arrived. A Google engineer (since fired) claimed in 2022 that the company's LaMDA chatbot understood its own personhood and was indistinguishable from 'a 7-year-old, 8-year-old kid that happens to know physics'. And a 2025 study in which OpenAI's GPT-4.5 is claimed to have passed the Turing Test is seen as further proof. But most experts see this view as having jumped the gun, on the basis that these models have only mastered the game of imitation rather than developed their own general intelligence.

Ray Kurzweil predicts that AGI is just around the corner. The trusted academic, who has a track record of anticipating leaps forward in artificial intelligence, foretold its advent in the 2030s in his 2005 book 'The Singularity Is Near'.
He subsequently doubled down on this prediction in 2024's 'The Singularity Is Nearer', stating that artificial intelligence will 'reach human levels by around 2029' and will go on to 'multiply the human biological machine intelligence of our civilization a billion-fold'.

Kurzweil is more optimistic than most. In 2022, the 'Expert Survey on Progress in AI' received responses from 738 machine learning researchers. When asked to forecast when there would be a 50% chance of high-level machine intelligence (which shares many of the same traits as AGI), the average prediction was 2059. Emergence in the second half of the 21st century is a timeline shared by many moderate estimators. For others, however, the notion of computers reaching a human-like level of sentience is the domain of science fiction alone – or, at best, way beyond our lifetimes.

So has AI already exceeded human intelligence? The short answer is no. Regardless of whether they already pass the Turing Test, how good ChatGPT is at helping you through a panic attack, or how smart Anthropic's Claude is getting, the current crop of AI chatbots still falls short of the recognized requirements for AGI.

But these large language models (LLMs) shouldn't be written out of AGI's story entirely. Their popularity and exponential growth in users could be a useful foundation for AGI's development, according to innovators like OpenAI co-creator Ilya Sutskever, who suggests that LLMs are a path to AGI, likening their predictive nature to a genuine understanding of the world. Co-founder of Google's DeepMind Demis Hassabis is another prominent AI spokesperson who sees these chatbots as a component of AGI development.

Unsurprisingly, there are plenty of dissenting voices, too. François Chollet, another voice from Google, is an AI researcher and co-founder of the global ARC Prize for progress towards AGI. His view is that OpenAI has actually 'set back progress to AGI by five to 10 years', and he says that 'LLMs essentially sucked the oxygen out of the room – everyone is doing LLMs'. Meta's Chief AI Scientist, Yann LeCun, agrees that LLMs are a dead end when it comes to advances towards AGI.
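That 'predict the next best word' loop is easy to caricature in code. Here is a toy sketch with invented probabilities keyed on the previous word only; a real LLM computes the same kind of distribution with a neural network conditioned on the entire context, over a vocabulary of tens of thousands of tokens.

```python
import random

# Invented next-token probabilities, keyed on the previous token only.
NEXT = {
    "<start>":   {"the": 1.0},
    "the":       {"turing": 0.7, "imitation": 0.3},
    "turing":    {"test": 1.0},
    "imitation": {"game": 1.0},
    "test":      {"is": 0.6, "<end>": 0.4},
    "game":      {"is": 1.0},
    "is":        {"a": 1.0},
    "a":         {"benchmark": 1.0},
    "benchmark": {"<end>": 1.0},
}

def generate(max_tokens=10):
    """Sample one token at a time, append it, and repeat until <end>."""
    tokens = ["<start>"]
    for _ in range(max_tokens):
        dist = NEXT[tokens[-1]]
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        if token == "<end>":
            break
        tokens.append(token)
    return " ".join(tokens[1:])

print(generate())  # e.g. "the turing test is a benchmark"
```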

Can ChatGPT pass the Turing Test yet?

Yahoo
11-05-2025

Artificial intelligence chatbots like ChatGPT are getting a whole lot smarter, a whole lot more natural, and a whole lot more…human-like. It makes sense – humans are the ones creating the large language models that underpin AI chatbots' systems, after all. But as these tools get better at "reasoning" and mimicking human speech, are they smart enough yet to pass the Turing Test? For decades, the Turing Test has been held up as a key benchmark in machine intelligence. Now, researchers are actually putting LLMs like ChatGPT to the test. If ChatGPT can pass, the accomplishment would be a major milestone in AI development.

So, can ChatGPT pass the Turing Test? According to some researchers, yes. However, the results aren't entirely definitive. The Turing Test isn't a simple pass/fail, which means the results aren't really black and white. Besides, even if ChatGPT could pass the Turing Test, that may not really tell us how 'human' an LLM really is. Let's break it down.

The concept of the Turing Test is actually pretty simple. The test was originally proposed by British mathematician Alan Turing, the father of modern computer science and a hero to nerds around the world. In 1950, he proposed the Imitation Game – a test for machine intelligence that has since been named for him. The Turing Test involves a human judge having a conversation with both a human and a machine without knowing which one is which (or who is who, if you believe in AGI). If the judge can't tell which one is the machine and which one is the human, the machine passes the Turing Test. In a research context, the test is performed many times with multiple judges. Of course, the test can't necessarily determine if a large language model is actually as smart as a human (or smarter) – just whether it's able to pass for a human.

Large language models, of course, do not have a brain, consciousness, or world model. They're not aware of their own existence. They also lack true opinions or beliefs. Instead, large language models are trained on massive datasets of information – books, internet articles, documents, transcripts. When a user inputs text, the AI model uses its "reasoning" to determine the most likely meaning and intent of the input. Then, the model generates a response. At the most basic level, LLMs are word prediction engines. Drawing on their vast training data, they calculate probabilities for the first 'token' (usually a word or piece of a word) of the response across their vocabulary, then repeat this process until a complete response is generated. That's an oversimplification, of course, but let's keep it simple: LLMs generate responses to input based on probability and statistics. The response of an LLM is based on mathematics, not an actual understanding of the world. So, no, LLMs don't actually think in any sense of the word.

There have been quite a few studies to determine whether ChatGPT has passed the Turing Test, and many of them have had positive findings. That's why some computer scientists argue that, yes, large language models like GPT-4 and GPT-4.5 can now pass the famous Turing Test. Most tests focus on OpenAI's GPT-4 model, the one used by most ChatGPT users. Using that model, a study from UC San Diego found that in many cases, human judges were unable to distinguish GPT-4 from a human. In the study, GPT-4 was judged to be a human 54% of the time. However, this still lagged behind actual humans, who were judged to be human 67% of the time.
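Percentages like these are proportions over many individual judge decisions, so it helps to ask how far they sit from the 50% chance line. Here is a quick sketch using an exact binomial tail; the trial count of 100 is invented for illustration (the UC San Diego study reports its own sample sizes), and the 73% figure anticipates the GPT-4.5 result discussed below.

```python
from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the probability of being judged
    human at least k times out of n if every judge just flipped a coin."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical counts: 100 judge decisions per model.
print(p_at_least(54, 100))  # ~0.24 -> 54% is consistent with coin-flipping
print(p_at_least(73, 100))  # well below 0.001 -> 73% is no fluke
```

The first result is the sense in which GPT-4 'passed': with a sample of this size, judges picking it as the human 54% of the time is statistically indistinguishable from guessing. The second shows why a 73% rate is a qualitatively different outcome.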
Then, GPT-4.5 was released, and the UC San Diego researchers performed the study again. This time, the large language model was identified as human 73% of the time, outperforming actual humans. The test also found that Meta's LLaMa-3.1-405B was able to pass. Other studies outside UC San Diego have also given GPT passing grades. A 2024 University of Reading study had GPT-4 create answers for take-home assessments in undergraduate courses. The graders weren't told about the experiment, and they flagged only one of 33 entries; ChatGPT received above-average grades on the other 32.

So, are these studies definitive? Not quite. Some critics (and there are a lot of them) say these research studies aren't as impressive as they seem, which is why we aren't ready to definitively say that ChatGPT passes the Turing Test. We can say that while previous-gen LLMs like GPT-4 sometimes passed the Turing Test, passing grades are becoming more common as LLMs get more advanced. And as cutting-edge models like GPT-4.5 come out, we're fast headed toward models that can easily pass the Turing Test every time. OpenAI itself certainly envisions a world in which it's impossible to tell human from AI. That's why OpenAI CEO Sam Altman has invested in a human verification project with an eyeball-scanning machine called The Orb.

We decided to ask ChatGPT if it could pass the Turing Test, and it told us yes, with the same caveats we've already discussed. When we posed the question, "Can ChatGPT pass the Turing Test?" to the AI chatbot (using the 4o model), it told us, "ChatGPT can pass the Turing Test in some scenarios, but not reliably or universally." The chatbot concluded, "It might pass the Turing Test with an average user under casual conditions, but a determined and thoughtful interrogator could almost always unmask it."

Some computer scientists now believe the Turing Test is outdated and not all that helpful in judging large language models. Gary Marcus, an American psychologist, cognitive scientist, author, and popular AI prognosticator, summed it up best in a recent blog post: 'As I (and many others) have said for years, the Turing Test is a test of human gullibility, not a test of intelligence.' It's also worth keeping in mind that the Turing Test is about the perception of intelligence rather than actual intelligence – an important distinction. A model like ChatGPT 4o might be able to pass simply by mimicking human speech. Whether or not a large language model passes the test will also vary depending on the topic and the tester: ChatGPT could easily ape small talk, but it could struggle with conversations that require true emotional intelligence. What's more, modern AI systems are used for much more than chatting, especially as we head toward a world of agentic AI.

None of that is to say the Turing Test is irrelevant. It's a neat historical benchmark, and it's certainly interesting that large language models are able to pass it. But the Turing Test is hardly the gold-standard benchmark of machine intelligence. What would a better benchmark look like? That's a whole other can of worms that we'll have to save for another story.

Disclosure: Ziff Davis, Mashable's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
