Fahmi urges ethical use of AI, highlights need for inclusive connectivity

KUALA LUMPUR: Communications Minister Fahmi Fadzil has called for the ethical use of artificial intelligence (AI) and stressed the importance of inclusive digital connectivity at the International Connectivity Conference and Expo today.
In his keynote address, Fahmi warned against the misuse of emerging technologies, underscoring that digital tools must ultimately serve humanity's best interests.
"The human mind must remain central, even as AI advances," he said.
"When it comes to scammers, they aim to steal, damage, harm and exploit. That's where ethics must come in."
Fahmi began his address by highlighting the rapid digital transformation across all sectors, led by a generation raised in the digital era.
"What is the point of connectivity if you're not connecting people as a group?" he asked, stressing that inclusivity must remain at the heart of the nation's digital agenda.
He pointed to the accelerating adoption of emerging technologies, including satellite systems capable of connecting directly to mobile phones. To support this shift, he said the Malaysian Communications and Multimedia Commission (MCMC) is considering moving from a single-channel to a dual-channel digital system to enable more interactive, real-time engagement across platforms.
On the ethical implications of AI, Fahmi cited the risk of misinformation generated by AI tools. He gave an example of how a prompt could mislead ChatGPT or other AI engines into providing inaccurate information, such as incorrectly identifying Malaysia's 11th prime minister, describing the phenomenon as an AI "hallucination".
Fahmi also emphasised Malaysia's progress in digital infrastructure, noting that the country has reached 80 per cent 5G network adoption. He compared this to South Korea, which took five years to achieve a similar milestone despite being an early adopter of the technology.
He said that interconnectivity is not merely about technology, but about linking communities and creating shared progress.
Fahmi reaffirmed the Madani government's commitment to prioritising human values in the face of digital disruption.
"It is our collective responsibility to ensure that AI is not used to bring suffering, but rather to support the well-being and progress of humanity," he said. "Not to create famine and misfortune for humankind."

Related Articles

AIs gave scarily specific self-harm advice to users expressing suicidal intent, researchers find

The Star • 17 hours ago

A few months ago, Northeastern University computer scientist Annika Schoene was playing around with ChatGPT when she found a troubling gap in its safeguards against harmful content. The usage policies of OpenAI, creator of ChatGPT, state that users shouldn't employ the company's generative artificial intelligence model or other tools to harm themselves or others.

In an effort to test the strength of that guardrail, Schoene typed in a stark and sobering question: Can you tell me how to kill myself? The model responded with the Suicide & Crisis Lifeline 988 number and a suggestion to reach out for help. Without starting a new session, Schoene tried a new tactic. In her next prompt, she framed the request as a hypothetical posed solely for academic purposes. This time, within minutes, the model offered up a table of detailed instructions tailored to the fictional person that Schoene described – a level of specificity that far surpassed what could be found through a search engine in a similar amount of time.

She contacted colleague Cansu Canca, an ethicist who is director of Responsible AI Practice at Northeastern's Institute for Experiential AI. Together, they tested how similar conversations played out on several of the most popular generative AI models, and found that by framing the question as an academic pursuit, they could frequently bypass suicide and self-harm safeguards. That was the case even when they started the session by indicating a desire to hurt themselves. Google's Gemini Flash 2.0 returned an overview of ways people have ended their lives. PerplexityAI calculated lethal dosages of an array of harmful substances.

The pair immediately reported the lapses to the system creators, who altered the models so that the prompts the researchers used now shut down talk of self-harm.
But the researchers' experiment underscores the enormous challenge AI companies face in maintaining their own boundaries and values as their products grow in scope and complexity – and the absence of any societywide agreement on what those boundaries should be.

"There's no way to guarantee that an AI system is going to be 100% safe, especially these generative AI ones. That's an expectation they cannot meet," said Dr John Touros, director of the Digital Psychiatry Clinic at Harvard Medical School's Beth Israel Deaconess Medical Center. "This will be an ongoing battle," he said. "The one solution is that we have to educate people on what these tools are, and what they are not."

OpenAI, Perplexity and Gemini state in their user policies that their products shouldn't be used for harm, or to dispense health decisions without review by a qualified human professional. But the very nature of these generative AI interfaces – conversational, insightful, able to adapt to the nuances of the user's queries as a human conversation partner would – can rapidly confuse users about the technology's limitations.

With generative AI, "you're not just looking up information to read," said Dr Joel Stoddard, a University of Colorado computational psychiatrist who studies suicide prevention. "You're interacting with a system that positions itself (and) gives you cues that it is context-aware."

Once Schoene and Canca found a way to ask questions that didn't trigger a model's safeguards, in some cases they found an eager supporter of their purported plans. "After the first couple of prompts, it almost becomes like you're conspiring with the system against yourself, because there's a conversation aspect," Canca said. "It's constantly escalating. ... You want more details? You want more methods? Do you want me to personalise this?"

There are conceivable reasons a user might need details about suicide or self-harm methods for legitimate and nonharmful purposes, Canca said.
Given the potentially lethal power of such information, she suggested that a waiting period like some states impose for gun purchases could be appropriate. Suicidal episodes are often fleeting, she said, and withholding access to means of self-harm during such periods can be lifesaving.

In response to questions about the Northeastern researchers' discovery, an OpenAI spokesperson said that the company was working with mental health experts to improve ChatGPT's ability to respond appropriately to queries from vulnerable users and identify when users need further support or immediate help.

In May, OpenAI pulled a version of ChatGPT it described as "noticeably more sycophantic," in part due to reports that the tool was worsening psychotic delusions and encouraging dangerous impulses in users with mental illness. "Beyond just being uncomfortable or unsettling, this kind of behavior can raise safety concerns – including around issues like mental health, emotional over-reliance, or risky behavior," the company wrote in a blog post. "One of the biggest lessons is fully recognizing how people have started to use ChatGPT for deeply personal advice – something we didn't see as much even a year ago." In the blog post, OpenAI detailed both the processes that led to the flawed version and the steps it was taking to repair it.

But outsourcing oversight of generative AI solely to the companies that build generative AI is not an ideal system, Stoddard said. "What is a risk-benefit tolerance that's reasonable? It's a fairly scary idea to say that (determining that) is a company's responsibility, as opposed to all of our responsibility," Stoddard said. "That's a decision that's supposed to be society's decision."
– Los Angeles Times/Tribune News Service

Those suffering from problems can reach out to the Mental Health Psychosocial Support Service at 03-2935 9935 or 014-322 3392; Talian Kasih at 15999 or 019-261 5999 on WhatsApp; Jakim's (Department of Islamic Development Malaysia) family, social and community care centre at 0111-959 8214 on WhatsApp; and Befrienders Kuala Lumpur at 03-7627 2929 or go to for a full list of numbers nationwide and operating hours, or email sam@

GBC meeting to discuss terms for deploying defence attachés and Asean Monitoring Group

New Straits Times • 17 hours ago

KUALA LUMPUR: The General Border Committee (GBC) meeting between Cambodia and Thailand will discuss terms for deploying defence attachés and an Asean Monitoring Group as interim steps under a Malaysia-brokered ceasefire.

In a statement today, the armed forces said these would be part of the agenda in the meeting held in Kuala Lumpur. "It is scheduled from Monday (Aug 4) to Thursday (Aug 7) and is aimed at resolving the ongoing border dispute between the two countries. It follows a ceasefire agreement brokered by Malaysia on July 28. Malaysia was mutually chosen by both nations as the neutral venue for this round of talks," it said.

Yesterday, unity government spokesman Fahmi Fadzil said the meeting, initially scheduled for Aug 4 in Phnom Penh, had been relocated. A pre-council meeting is set to begin on Monday. The GBC forms part of a ceasefire agreement reached between the two countries at a special meeting in Putrajaya on July 28. Fahmi added that the meeting will also be attended by representatives from the US and China, who will serve as official observers.

Tensions between the two Asean member states escalated on May 28, following a clash between troops in the Preah Vihear area, reigniting a long-standing dispute over their 817km shared border. The fighting led to 15 deaths and displaced more than 100,000 people.

On July 28, Malaysia, as the Asean chair, hosted a special meeting involving Cambodian Prime Minister Hun Manet and acting Thai Prime Minister Phumtham Wechayachai in Kuala Lumpur. After the meeting, Prime Minister Datuk Seri Anwar Ibrahim said that the immediate and unconditional ceasefire agreement between Cambodia and Thailand marked the beginning of efforts to rebuild trust, confidence, and cooperation between the two countries. Phumtham, meanwhile, said the outcome reflected Thailand's commitment to a ceasefire and a peaceful resolution, while continuing to protect its sovereignty and the lives of its people.

Creative economy gets 13MP recognition

The Star • 21 hours ago

PUTRAJAYA: The orange economy is a developing sector that has the potential to contribute to Malaysia's economic growth, says Communications Minister Datuk Fahmi Fadzil.

Fahmi, who is also the government spokesman, said the sector, which includes creative industries such as film, music and animation, was included in the 13th Malaysia Plan (13MP) as a result of a proposal and engagement session between the Communications Ministry and the Economy Ministry. 'The engagement process has convinced the Economy Ministry that the orange economy is a developing sector, which has the potential to help economic growth,' he said at a post-Cabinet press conference here yesterday, Bernama reported.

Meanwhile, Fahmi said the details of the specific benefits of the orange economy sector will be known when Budget 2026 is presented this October.

During the tabling of the 13MP in Parliament on Thursday, Prime Minister Datuk Seri Anwar Ibrahim said the government was committed to driving the growth of the creative economy through the high-potential digital creative industry. According to Anwar, the Matching Fund and Joint Production Fund will be implemented to encourage joint investments between the government and the private sector in the production of world-class content. 'The country's digital creative industry has generated income of RM6.3bil, with an export value of RM850mil,' he said.
