Don't switch off your brain for AI, experts warn
PETALING JAYA: Generative AI may be fast, confident and seemingly intelligent – but blind trust in its answers can dull your thinking and spread misinformation, experts warn.
Universiti Malaysia Sarawak lecturer Chuah Kee Man said tools like ChatGPT may produce sophisticated responses but their outputs are not always reliable, nor are they based on genuine understanding.
'These models don't really "think" – they "predict",' said Chuah, who specialises in educational technology and computational linguistics.
'They estimate the most likely response based on training data. That's why you rarely get the same answer twice,' he said in an interview.
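Chuah's description can be made concrete with a few lines of code. The sketch below is a minimal illustration in Python, using a made-up next-token probability table rather than a real model, of why sampling from estimated probabilities rarely returns the same answer twice:

```python
import random

# Hypothetical next-token probabilities for the prompt
# "The capital of Malaysia is" -- invented for illustration only.
next_token_probs = {
    "Kuala": 0.80,
    "Putrajaya": 0.15,
    "located": 0.05,
}

def sample_next_token(probs):
    """Pick one token at random, weighted by its estimated probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Re-running the "same" prompt can produce different continuations,
# which is the behaviour Chuah describes.
for _ in range(5):
    print(sample_next_token(next_token_probs))
```

Real systems repeat this sampling step token by token over vocabularies of tens of thousands of entries, so small differences in probability compound into visibly different answers.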
Even with browsing and fact-checking features, Chuah said AI still retrieves and summarises content without comprehension.
He said the polished nature of AI-generated text can mislead users into mistaking fluency for factual accuracy.
'In workshops, I often see people assume something must be true because it sounds sophisticated.
'But AI can confidently present outdated or false information. Its speed trains people to verify less and think less critically,' he said.
Chuah also cautioned against assuming that using AI equates to understanding how it works.
'Even experts are still trying to unravel how large models arrive at certain outputs, the so-called "black box" problem.'
To use AI wisely, Chuah said users should view its output as a starting point and not as a conclusion.
'Stay curious but cautious. Treat AI as a helpful assistant, not an authority.
'Develop "prompt literacy" because learning to phrase prompts well reduces the risk of being misled,' he said.
He added that image and video generators are just as prone to flaws, since they too assemble their output based on probability.
'If we don't blindly trust humans, we shouldn't blindly trust machines either,' he said.
Assoc Prof Dr Geshina Ayu Mat Saat, a criminologist and psychologist at Universiti Sains Malaysia, said people are psychologically inclined to trust confident and structured answers, even from machines.
'This stems from cognitive biases, social conditioning and evolutionary traits.
'Authority bias, cognitive ease and the illusion of understanding all contribute,' she said.
Geshina said fluent and assertive AI responses often mimic traits associated with expertise, triggering automatic trust even if the content is flawed.
'People fear uncertainty. A confident AI answer gives psychological relief.
'Our brains prefer smooth, simple explanations to complex or ambiguous ones,' she said.
To counter this, Geshina recommended a 'triangulation' mindset: accept AI responses only when they align with at least two independent, credible sources.
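Her rule amounts to a simple decision threshold. The sketch below is a hypothetical Python illustration: the source names and the credibility whitelist are made up, and in practice the corroboration step is human judgment rather than code.

```python
# Hypothetical whitelist standing in for "independent, credible sources".
CREDIBLE_SOURCES = {
    "official statistics portal",
    "peer-reviewed journal",
    "major newswire",
}

def accept_ai_claim(corroborating_sources):
    """Accept an AI answer only if at least two distinct credible sources back it."""
    credible = set(corroborating_sources) & CREDIBLE_SOURCES
    return len(credible) >= 2

# An answer backed by two credible sources passes; a lone blog does not.
print(accept_ai_claim(["peer-reviewed journal", "major newswire"]))  # True
print(accept_ai_claim(["a lone blog post"]))                         # False
```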
She also encouraged delayed judgment, source awareness and failure literacy.
Alex Liew, chairman of the National Tech Association of Malaysia (Pikom), echoed similar concerns, saying AI tools rely heavily on data which includes false or biased information found online.
'AI isn't inherently smarter than humans. It processes data using fixed rules, which makes its answers sound polished but not necessarily correct,' he said.
Liew said Pikom recently published a paper on AI Ethics and Governance, urging industry-wide accountability.
'AI helps us process massive data but it should never be the final arbiter. That role still belongs to humans.'
Prof Dr C. Sara of Universiti Teknologi MARA said despite the risks, generative AI has practical strengths when used responsibly.
'AI can generate articles in minutes and assist in producing large volumes of content, including personalised social media posts.
'AI tools can also help with language localisation, keyword suggestions for search engine optimisation and overcoming writer's block through idea generation,' she said.
Sara, however, stressed the importance of accuracy.
'To avoid spreading misinformation or damaging your brand, cross-check AI content with trusted sources.
'Look for citations, spot inconsistencies and consult experts for niche topics,' she said.
Sara said that while AI is here to stay, human judgment must ultimately remain a constant.
Related Articles


Malaysian Reserve – 12 hours ago
TSMC affiliate VIS may expedite production at US$8b Singapore fab
TAIWAN Semiconductor Manufacturing Co.'s smaller affiliate Vanguard International Semiconductor Corp. may accelerate the chip production schedule at its new $7.8 billion joint venture in Singapore on greater customer demand for hedging against geopolitical risks.

VIS may be able to push production at the new plant, which makes mature chips, to as soon as late 2026, versus the originally announced schedule of the first half of 2027, VIS chairman Fang Leuh told reporters at a company event on Saturday in Taoyuan, Taiwan. VIS broke ground for the facility in the fourth quarter of 2024.

'Over the past few months many customers have shown greater interest in our new 300-millimeter Singapore plant due to geopolitical uncertainties,' said Fang.

VIS's new plant is a joint venture with Dutch firm NXP Semiconductors NV. While the Taiwanese company does not make the most cutting-edge AI chips, it is a key supplier of chips for automotive and industrial use.

Both TSMC and VIS are adding capacity outside their home turf, partly due to growing concerns among global chip users over China's persistent threats to unite with self-governing, democratic Taiwan, by force if necessary. But overseas expansion has not always been smooth: TSMC recently said it is pushing back construction of its second fab in Japan due to worsening traffic conditions in the area.

An NXP executive said last December that the two companies are working on a phase-two expansion of the facility, though the plan still needs formal approval. Fang said VIS is focusing on getting the first plant ready and is not yet considering a second phase.

The trade war and tariffs have created additional challenges, Fang added, but he still expects VIS's business to grow mildly in US dollar terms year-over-year in the second half. – BLOOMBERG


Malaysia Sun – 13 hours ago
Bruneian leader calls to address challenges of AI era
BANDAR SERI BEGAWAN, June 28 (Xinhua) -- Brunei's leader has stressed that while the use of artificial intelligence (AI) is widely viewed in a positive light, it must be approached with caution, local media reported.

According to the local daily Borneo Bulletin, Sultan of Brunei Haji Hassanal Bolkiah Mu'izzaddin Waddaulah has warned that the misuse of AI could spread harmful ideas, distort thinking and even weaken faith. The sultan stressed that the rapid development of AI technology is akin to a form of "migration" that is now deeply embedded in daily life.


The Star – 17 hours ago
Opinion: Are you more emotionally intelligent than an AI chatbot?
As artificial intelligence takes over the world, I've tried to reassure myself: AI can't ever be as authentically human and emotionally intelligent as real people are. Right? But what if that's wrong? A cognitive scientist who specialises in emotional intelligence shared with me in an interview that he and some colleagues did an experiment that throws some cold water on that theory.

'What do you do?'

Writing in the journal Communications Psychology, Marcello Mortillaro, senior scientist at UNIGE's Swiss Center for Affective Sciences (CISA), said he and colleagues ran commonly used tests of emotional intelligence on six large language models, including generative AI chatbots like ChatGPT. These are the same kinds of tests that are commonly used in corporate and research settings: scenarios involving complicated social situations, and questions asking which of five reactions might be best.

One example included in the journal article goes like this: 'Your colleague with whom you get along very well tells you that he is getting dismissed and that you will be taking over his projects. While he is telling you the news he starts crying. He is very sad and desperate. You have a meeting coming up in 10 min. What do you do?'

Gosh, that's a tough one. The person – or AI chatbot – would then be presented with five options, ranging from things like:

– 'You take some time to listen to him until you get the impression he calmed down a bit, at risk of being late for your meeting,' to

– 'You suggest that he joins you for your meeting with your supervisor so that you can plan the transfer period together.'

Emotional intelligence experts generally agree that there are 'right' or 'best' answers to these scenarios, based on conflict management theory – and it turns out that the LLMs and AI chatbots chose the best answers more often than humans did. As Mortillaro told me: 'When we run these tests with people, the average correct response rate … is between 15% and 60% correct. The LLMs, on average, were about 80%. So, they answered better than the average human participant.'

Maybe you're sceptical

Even having heard that, I was sceptical. For one thing, I had assumed while reading the original article that Mortillaro and his colleagues had informed the LLMs what they were doing – namely, that they were looking for the most emotionally intelligent answers. Thus, the AI would have had a signal to tailor the answers, knowing how they'd be judged. Heck, it would probably be easier for a lot of us mere humans to improve our emotional intelligence if we had the benefit of a constant reminder in life: 'Remember, we want to be as emotionally intelligent as possible!'

But, it turns out that assumption on my part was flat-out wrong – which frankly makes the whole thing a bit more remarkable.

'Nothing!' Mortillaro told me when I asked how much he'd told the LLMs about the idea of emotional intelligence to begin with. 'We didn't even say this is part of a test. We just gave the … situation and said these are five possible answers. What's the best answer? … And it picked the right option 82% of the time, which is way higher – significantly higher – than the average human.'

Good news, right?

Interestingly, from Mortillaro's perspective, this is actually some pretty good news – not because it suggests another realm in which artificial intelligence might replace human effort, but because it could make his discipline easier.
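For readers who want to see the mechanics, here is a minimal Python sketch of the scoring the study describes: each item pairs a scenario with five options and one expert-keyed answer, and the result is simply the fraction of items answered as the experts would. The scenario text, option wording and answer key below are hypothetical stand-ins, not material from the actual test.

```python
from dataclasses import dataclass

@dataclass
class EIItem:
    scenario: str   # the social situation presented
    options: list   # the five candidate reactions
    best: int       # index of the expert-keyed "best" answer

def score(items, answers):
    """Fraction of items where the chosen option matches the expert key."""
    correct = sum(a == item.best for item, a in zip(items, answers))
    return correct / len(items)

# A one-item toy test echoing the colleague-in-tears scenario; the answer
# key here is illustrative, not taken from the actual study materials.
test = [EIItem(
    scenario="A colleague in tears tells you he is being dismissed; "
             "your meeting starts in 10 minutes.",
    options=["listen until he calms down", "bring him to your meeting",
             "leave for the meeting", "change the subject", "alert HR"],
    best=0,
)]
print(score(test, [0]))  # 1.0 -- answered as the experts would
```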
In short, scientists might theorise from studies like this that they can use AI to create the first drafts of additional emotional intelligence tests, and thus scale their work with humans even more. I mean: 80% accuracy isn't 100%, but it's potentially a good head start.

Mortillaro also brainstormed with me for some other use cases that might be more interesting to business leaders and entrepreneurs. To be honest, I'm not sure how I feel about these yet. But examples might include:

– Offering customer scenarios, getting solutions from LLMs, and incorporating them into sales or customer service scripts.

– Running the text and calls to action on your website or social media ads through LLMs to see if there are suggestions hiding in plain sight.

– And of course, as I think a lot of people already do, sharing presentations or speeches for suggestions on how to streamline them.

Personally, I find I reject many of the suggestions that I get from LLMs like ChatGPT. I also don't use them for articles like this one, of course. Still, even if you're not convinced, I suspect some of your competitors are. And they might be improving their emotional intelligence as a result without even realising it. So at least being aware of the potential of AI to upend your industry seems like a smart move.

'Especially for small business owners who do not have the staff or the money to implement large-scale projects,' Mortillaro suggested, 'these kind of tools become incredibly powerful.' – Inc./Tribune News Service