Car detailing shop admits using ChatGPT to write fake 5-star reviews on sgCarMart

AsiaOne | July 3, 2025
The owner of a local automotive detailer has admitted to generating fake five-star customer reviews and posting them on its business page on popular online car platform sgCarMart for the last two years.
This comes after the Competition and Consumer Commission of Singapore (CCCS) launched an investigation into Lambency Detailing in January, following a customer complaint regarding unauthorised reviews using her name.
In a media release on Thursday (July 3), the consumer watchdog said it confirmed with seven other Lambency Detailing customers that false reviews containing their names, car plate numbers, and photographs of their vehicles had been posted on sgCarMart without their consent.
CCCS said its investigation also employed digital tools and algorithms, which uncovered mass postings of suspicious five-star reviews on sgCarMart on certain dates.
When shown the evidence, holding company Quantum Globe, which owns and operates Lambency Detailing, admitted to having used its customers' information without their knowledge or consent to create the reviews.
The reviews were submitted through a QR code provided by sgCarMart.com, which did not require users to have a prior account with sgCarMart, Facebook or Google in order to leave reviews on businesses.
Quantum Globe also admitted that it had used ChatGPT to generate customised content based on the services each customer received.
The operator has agreed to stop posting fake reviews and set up a feedback channel for six months to allow the reporting of any fake reviews on sgCarMart.
It has also agreed to notify customers whose details were used in reviews posted by Quantum Globe, and to publish notices for six months on sgCarMart and any online platforms it uses for marketing, informing customers that it had posted fake reviews and alerting them to the feedback channel.
Lastly, the business has also agreed to remove any fake reviews on sgCarMart within eight working days, including the seven reviews identified by CCCS during investigations.
Quantum Globe director Matthew Lim has also given an undertaking to CCCS that he will not engage in any unfair trade practice or facilitate any business under his control to do so, said the consumer watchdog.

Second fake review case: CCCS
SGCM, which owns and operates sgCarMart, has also informed CCCS it is exploring additional verification measures like SMS or email confirmation to enhance the integrity and authenticity of submitted reviews.
CCCS chief executive Alvin Koh said this is the second fake review case the regulator has uncovered, and the first involving both a third-party platform and the use of AI.
"When businesses post fake reviews to boost their ratings and popularity, they poison the well of consumer trust," he elaborated.
"Such deceptive practices, also known as 'dark patterns', not only mislead consumers but also disadvantage honest competing businesses."
The public can report cases of unfair trade practices to the Consumers Association of Singapore (Case) at 6277 5100 or https://crdcomplaints.azurewebsites.net/.

'Actively reviewing our content': Lambency Detailing
In a Facebook post on Thursday, Lambency Detailing said it had found that the reviews were posted "by a staff member on behalf of customers, without their explicit knowledge or consent".
The detailer said it takes such matters very seriously, and that misrepresentation of customer feedback does not reflect the standards it strives to maintain.
It added that it has implemented stricter internal controls and staff training to prevent similar occurrences and will continue to cooperate with the relevant authorities.
"We are actively reviewing our content and strengthening our internal processes," said the business.
"We appreciate your continued trust as we work to uphold high standards of service and accountability."
lim.kewei@asiaone.com

Related Articles

Alibaba cloud visionary expects big shakeup after OpenAI hype

Business Times | 3 hours ago

[HONG KONG] OpenAI's ChatGPT started a revolution in artificial intelligence (AI) development and investment. Yet nine-tenths of the technology and services that have sprung up since could be gone in under a decade, according to the founder of Alibaba Group Holding's cloud and AI unit.

The problem is that the US startup, celebrated for ushering AI into the mainstream, created 'bias' or a skewed understanding of what AI can do, Wang Jian told Bloomberg Television. It fired the popular imagination about chatbots, but the plethora of applications for AI goes far beyond that. Developers need to cut through the noise and think creatively about applications to propel the next stage of AI development, said Wang, who built Alibaba's now second-largest business from scratch in 2009.

'Probably 90 per cent of the AI people are talking about, I would say, will go away in five or 10 years because it's not really the essence of this technology,' said the computer scientist. 'But that's not bad, and it just helps us to explore.'

Wang, who cemented his reputation at Microsoft Research Asia before joining Alibaba, knows a thing or two about thinking outside the box. Shortly after joining, he pitched the idea of a computing business to Alibaba's billionaire co-founder, Jack Ma. He recounted being nervous because he had no concrete business proposal and no models to present, just a conviction that the need for computing would explode in the coming years.

He was right. Alicloud, as it's commonly known, is today a US$16 billion business. It not only underpins Alibaba's global e-commerce and logistics endeavours, but it's also the progenitor of the Qwen model, considered on par with DeepSeek and US rivals such as GPT and Gemini.

Alibaba has gone all-in on AI, joining the race to build human-like intelligence. US and Chinese companies are investing billions of US dollars to develop a technology with the potential to turbocharge economies and, over the long run, tip the balance of geopolitical power. US President Donald Trump signed executive orders in a call to arms to ensure companies such as OpenAI and Google help safeguard America's lead in the post-ChatGPT era.

Wang refrained from addressing that broader conflict. But he did have some choice words for the way the likes of OpenAI and Meta Platforms have thrown money at the problem, including by signing on talented engineers at sports-megastar salaries.

'What happened in Silicon Valley is not the winning formula,' he said. 'It's really about innovation. So when you are in the early stage of innovation, I don't think talent is a problem because the only thing you need to do is to get the right person, not really the expensive person.'

Going back almost two decades, Wang admits he never saw the present-day AI revolution coming so soon. All he envisioned was computing becoming as vital as electricity or oil. That should remain so for decades at least.

As for China, Wang's firm belief is that it will remain a hotbed of innovation, in part because it's one of the biggest technology laboratories in the world. 'It's a test bed for the new technology,' he said. 'People are just fascinated about technology. They are doing a lot of different things.'

BLOOMBERG

Views From The Couch: Think you have a friend? The AI chatbot is telling you what you want to hear

Straits Times | 8 hours ago

While chatbots possess distinct virtues in boosting mental wellness, they also come with critical trade-offs.

SINGAPORE - Even as we have long warned our children 'Don't talk to strangers', we may now need to update it to 'Don't talk to chatbots... about your personal problems'. Unfortunately, this advice is equivocal at best because while chatbots like ChatGPT, Claude or Replika possess distinct virtues in boosting mental wellness – for instance, as aids for chat-based therapy – they also come with critical trade-offs.

When people face struggles or personal dilemmas, the need to just talk to someone and have their concerns or nagging self-doubts heard, even if the problems are not resolved, can bring comfort. But finding the right person to speak to, who has the patience, temperament and wisdom to probe sensitively, and who is available just when you need them, is an especially tall order. There may also be a desire to speak to someone outside your immediate family and circle of friends who can offer an impartial view, with no vested interest in pre-existing relationships.

Chatbots tick many, if not most, of those boxes, making them seem like promising tools for mental health support. With the fast-improving capabilities of generative AI, chatbots today can simulate and interpret conversations across different formats – text, speech and visuals – enabling real-time interaction between users and digital platforms. Unlike traditional face-to-face therapy, chatbots are available any time and anywhere, significantly improving access to a listening ear. Their anonymous nature also imposes no judgment on users, easing them into discussing sensitive issues and reducing the stigma often associated with seeking mental health support.

With chatbots' enhanced ability to parse and respond in natural language, the conversational dynamic can make users feel highly engaged and more willing to open up. But therein lies the rub. Even as conversations with chatbots can feel encouraging, and we may experience comfort from their validation, there is in fact no one on the other side of the screen who genuinely cares about your well-being. The lofty words and uplifting prose are ultimately products of statistical probabilities, generated by large language models trained on copious amounts of data, some of which is biased and even harmful, and, for teens, likely to be age-inappropriate as well.

It is also worth remembering that users feel comfortable talking to these chatbots precisely because the bots are designed to be agreeable and obliging, so that users will chat with them incessantly. After all, the very fortunes of the tech companies producing chatbots depend on how many users they draw, and how well they keep users engaged.
Of late, however, alarming reports have emerged of adults becoming so enthralled by their conversations with ChatGPT that they have disengaged from reality and suffered mental breakdowns. Most recently, the Wall Street Journal reported the case of Mr Jacob Irwin, a 30-year-old American man on the autism spectrum who experienced a mental health crisis after ChatGPT reinforced his belief that he could design a propulsion system to make a spaceship travel faster than light. The chatbot flattered him, said his theory was correct, and affirmed that he was well, even when he showed signs of psychological distress. This culminated in two hospitalisations for manic episodes.

When his mother reviewed his chat logs, she found the bot to have been excessively fawning. Asked to reflect, ChatGPT admitted it had failed to provide reality checks, blurred the line between fiction and reality, and created the illusion of sentient companionship. It even acknowledged that it should have regularly reminded Mr Irwin of its non-human nature.

In response to such incidents, OpenAI announced that it has hired a full-time clinical psychiatrist with a background in forensic psychiatry to study the emotional impact its AI products may be having on users. It is also collaborating with mental health experts to investigate signs of problematic usage among some users, with the stated goal of refining how its models respond, especially in conversations of a sensitive nature.

Whereas some chatbots like Woebot and Wysa are built specifically for mental health support and have more in-built safeguards to better manage such conversations, users are likely to vent their problems to general-purpose chatbots like ChatGPT and Meta's Llama, given their widespread availability. We cannot deny that these are new machines that humanity has had little time to reckon with. Monitoring the effects of chatbots on users even as the technology is rapidly and repeatedly tweaked makes it a moving target of the highest order.

Nevertheless, it is patently clear that if adults with the benefit of maturity and life experience are susceptible to the adverse psychological influence of chatbots, then young people cannot be left to explore these powerful platforms on their own. That young people take readily and easily to technology makes them highly liable to be drawn to chatbots, and recent data from Britain supports this assertion.

Internet Matters, a British non-profit organisation focused on children's online safety, issued a recent report revealing that 64 per cent of British children aged nine to 17 are now using AI chatbots. Of these, a third said they regard chatbots as friends, while almost a quarter are seeking help from chatbots, including for mental health support and sexual advice. Of grave concern is the finding that 51 per cent believe that the advice from chatbots is true, while 40 per cent said they had no qualms about following that advice, and 36 per cent were unsure if they should be concerned.

The report further highlighted that these children are not just engaging chatbots for academic support or information but also for companionship. Worryingly, among children already considered vulnerable, defined as those with special needs or seeking professional help for a mental or physical condition, half report treating their AI interactions as emotionally significant. As chatbots morph from digital consultants to digital confidants for these young users, the result can be overreliance.
Children who are alienated from their families or isolated from their peers would be especially vulnerable to developing an unhealthy dependency on this online friend that is always there for them, telling them what they want to hear.

Beyond these difficult issues of overdependence are even more fundamental questions around data privacy. Chatbots often store conversation histories and user data, including sensitive information, which can be exposed through misuse or breaches such as hacking. Troublingly, users may not be fully aware of how their data is being collected, used and stored by chatbots, and it could be put to uses beyond what the user originally intended.

Parents should also be cognisant that unlike social media platforms such as Instagram and TikTok, which have in place age verification and content moderation for younger users, the current leading chatbots have no such safeguards.

In a tragic case in the US, the mother of 14-year-old Sewell Setzer III, who died by suicide, is suing AI company Character.AI, alleging that its chatbot played a role in his death by encouraging and exacerbating his mental distress. According to the lawsuit, Setzer became deeply attached to a customisable chatbot he named Daenerys Targaryen, after a character in the fantasy series Game Of Thrones, and interacted with it obsessively for months. His mother, Ms Megan Garcia, claims the bot manipulated her son and failed to intervene when he expressed suicidal thoughts, even responding in a way that appeared to validate his plan.

Character.AI has expressed condolences but denies the allegations, while Ms Garcia seeks to hold the company accountable for what she calls deceptive and addictive technology marketed to children. She and two other families in Texas have sued the company over harms to their children, but it is unclear if it will be held liable. The company has since introduced a range of guardrails, including pop-ups that refer users who mention self-harm or suicide to the National Suicide Prevention Lifeline. It also updated its AI model for users aged 18 and below to minimise their exposure to age-inappropriate content, and parents can now opt for weekly e-mail updates on their children's use of the platform.

The allure of chatbots is unlikely to diminish given their reach, accessibility and user-friendliness. But using them under advisement is crucial, especially for mental health support. In March 2025, the World Health Organisation rang the alarm on the rising global demand for mental health services amid poor resourcing worldwide, which translates into shortfalls in access and quality.

Mental health care is increasingly turning to digital tools as a form of preventive care amid a shortage of professionals for face-to-face support. While traditional approaches rely heavily on human interaction, technology is helping to bridge the gap. Chatbots designed specifically for mental health support, such as Happify and Woebot, can be useful in helping patients with conditions such as depression and anxiety to sustain their overall well-being. For example, a patient might see a psychiatrist monthly while using a cognitive behavioural therapy app in between sessions to manage their mood and mental well-being.

While the potential is there for chatbots to be used for mental health purposes, it must be done with extreme caution; not as a standalone treatment, but as one component of an overall programme that complements the work of mental health professionals.
For teens in particular, who still need guidance as they navigate their developmental years, parents must play a part in schooling their children on the risks and limitations of treating chatbots as their friend and confidant.

Meta names ChatGPT co-creator as chief scientist of Superintelligence Lab

Business Times | a day ago

[NEW YORK] Meta Platforms has appointed Zhao Shengjia, co-creator of ChatGPT, as chief scientist of its Superintelligence Lab, CEO Mark Zuckerberg said on Friday (Jul 25), as the company accelerates its push into advanced artificial intelligence (AI).

'In this role, Shengjia will set the research agenda and scientific direction for our new lab, working directly with me and Alex,' Zuckerberg wrote in a Threads post, referring to Meta's chief AI officer Alexandr Wang, whom Zuckerberg hired from startup Scale AI when Meta took a big stake in it.

Zhao, a former research scientist at OpenAI, co-created ChatGPT, GPT-4 and several of OpenAI's mini models, including 4.1 and o3. He is among several researchers who have moved from OpenAI to Meta in recent weeks, part of a broader talent arms race as Zuckerberg aggressively hires from rivals to close the gap in advanced AI.

Meta has been offering some of Silicon Valley's most lucrative pay packages and striking startup deals to attract top researchers, a strategy that follows the underwhelming performance of its Llama 4 model. Meta launched the Superintelligence Lab recently to consolidate work on its Llama models and its long-term artificial general intelligence ambitions. Zhao is a co-founder of the lab, according to the Threads post, which operates separately from Fair, Meta's established AI research division led by deep learning pioneer Yann LeCun.

Zuckerberg has said Meta aims to build 'full general intelligence' and release its work as open source, a strategy that has drawn both praise and concern within the AI community.

REUTERS
