
An intelligent approach to AI governance

Bangkok Post

a day ago


Thailand has drafted principles for artificial intelligence (AI) legislation as it seeks to establish an AI ecosystem and widen adoption. The Electronic Transactions Development Agency (ETDA) recently completed an online public hearing on the draft and plans to submit it for cabinet consideration by the end of July.

How was Thailand's AI regulatory framework developed?

Sak Segkhoonthod, senior advisor at ETDA, said enforcement of AI rules thus far has been based on soft laws or guidelines. An AI law is needed to help Thailand deal efficiently with the impacts of the evolving technology, he said. Since 2022, Thailand has studied global models, especially the EU's AI Act, and introduced two draft laws: one focused on regulating AI-enabled business services, and another on promoting AI innovation. These two drafts will be combined to form the basis of the AI law. Both adopted a risk-based framework, classifying AI systems into prohibited, high-risk and general-use categories.

ETDA took the lead in promoting AI governance rules, proposing four tiers. The first tier recommends Thailand work with other countries to enhance its global position in AI governance. The country also embraces Unesco's principles to advance AI governance, in line with international ethical standards. The second tier involves sectoral regulators overseeing policies in their respective areas. The third tier focuses on corporate implementation, where organisations adopt practical tools, guidelines and frameworks. ETDA has already launched the AI Governance Guidelines for Executives and the Generative AI Governance Guideline for Organisations. "We have plans to release up to 50 guidelines or tools or checklists, including on AI procurement, job redesign and AI readiness assessment to assist organisations in their AI transformation," said Mr Sak. The fourth tier promotes AI literacy at the individual level.

What are the benefits of the AI law?
He said the legislation aims to protect users from potential AI risks, establish governance rules and remove legal barriers that existing laws cannot address, unlocking broader AI adoption. For example, the Transport Ministry's current regulations do not support the deployment of autonomous vehicles, as they were not designed to address unmanned systems. "The new AI law will support innovations," said Mr Sak.

Having a dedicated AI law will help Thailand remove regulatory hurdles efficiently, he said. Relevant agencies can quickly develop their own organic AI laws based on the main law. The new law should also support tech entrepreneurs in testing AI in a controlled setting or regulatory sandbox, as well as in real-world conditions, he said. The draft should permit the use of previously collected personal data, originally gathered for other purposes, in the development or testing of AI systems intended for public benefit, under strict conditions, said Mr Sak.

"By providing legal clarity and confidence, the law will encourage broader AI adoption across sectors through a combination of contextualised use of AI, sector-specific oversight and a common governance framework, ensuring consistency and minimising regulatory conflicts between different domains," he said. For example, using AI to monitor student behaviour may raise ethical concerns and be inappropriate in some contexts. In contrast, applying AI to monitor driver behaviour is essential to ensure passenger safety, said Mr Sak.

What are the principles in the draft?

The principles focus on supervising AI risks. Legal recognition should be granted to actions and outcomes produced by AI, and such recognition should not be denied solely because no human directly intervened, unless a specific clause allows such denial, he said. As AI is a human-controlled tool, all actions and outcomes derived from AI must remain attributable to humans, said Mr Sak.
Individuals may be legally exempt from acts or contracts generated by AI in cases where the party responsible for the AI could not have reasonably foreseen the AI's behaviour, and the other party was aware -- or should reasonably have been aware -- that such actions were unforeseeable to the responsible party, according to the draft.

He said the law will not define a list of prohibited or high-risk AI applications, instead empowering sectoral regulators to define these lists based on their domain expertise. The draft proposes that providers of AI services are bound by a duty of care to adopt risk management rules based on global guidelines and best practices. Overseas-based companies that provide AI services in Thailand will be required to appoint legal representatives in the country. Law enforcement agencies can order AI service providers or users of AI to stop providing services or using AI, according to the draft. Companies that use AI to generate content are expected to label it or adopt other relevant methods to inform consumers.

Which authority oversees AI law enforcement?

ETDA's AI Governance Center (AIGC) is expected to coordinate with related parties on law enforcement. The existing regulators in all sectors will define and enforce rules for high-risk AI in their domains, according to the draft. Under the AI law, two key committees will be established: a regulator committee responsible for issuing practical frameworks and setting policies in coordination with the sectoral regulators, and an expertise committee tasked with monitoring and evaluating emerging AI risks to ensure timely and informed regulatory responses.

What do companies think of the draft?

Mr Sak said that as of June 20, 80 organisations, including Google and Microsoft, had submitted feedback during the recent public hearing. The majority praised the draft for striking a balance between prohibiting harmful uses and promoting innovation.
However, some feedback raised concerns about whether sectoral regulators will be ready to supervise AI efficiently. In addition, the issue of AI sovereignty was highlighted, including the risk that foreign generative AI models may provide incomplete or inaccurate responses to users on matters related to Thailand, due to limited local data representation. "We are considering the development of common benchmarking guidelines for privately owned large language models in the Thai language," he said.

Ratanaphon Wongnapachant, chief executive of CLOUD, welcomed the AI legislation, calling it a timely step to prevent misuse and enforce responsible AI practices, particularly in sensitive sectors.

Pochara Arayakarnkul, chief executive of Bluebik Group, expressed concern over the definition of AI in the upcoming legislation. He said if the definition is too broad, it could have far-reaching implications; conversely, a narrow definition may fail to cover emerging risks. AI governance must go beyond a single risk dimension, as each industry adopts AI in fundamentally different ways, with varying degrees of risk depending on how mature the technology is, said Mr Pochara. "The implications span multiple dimensions, from transparency and accountability to operational reliability," he said.

Touchapon Kraisingkorn, head of AI Labs at Amity Group, proposed establishing objective, easy-to-understand criteria for defining high-risk and prohibited AI, using metrics such as the number of users, the impact on fundamental rights or the monetary value of potential damages. "This would promote uniform interpretation across the private sector and reduce the discretionary burden on regulators," he said. Mr Touchapon also proposed a tiered compliance framework for small and medium-sized enterprises based on their size, as determined by revenue and employee count.
He said this mechanism should be independent of a company's age, allowing startups the space to innovate before taking on the full scope of regulatory responsibilities as they mature. Moreover, a formal certification programme for "AI auditors" should be developed, complemented by the promotion of open-source tools for model clarity and risk assessment, to ensure both industry and government have the necessary talent and tools to comply with new standards, said Mr Touchapon.

"We strongly recommend an 'AI incident portal', which is a public, anonymised repository of AI system failures and rights violations that would be an invaluable resource, enabling all parties to learn and adapt quickly. This fosters a necessary culture of transparency and trust in AI systems," he said.

For labelling or watermarking AI-generated content, Mr Touchapon recommended a phased approach, starting with a voluntary programme to assess its effectiveness before mandating a general requirement. This strategy allows for a timely response to deepfakes and misinformation without placing a premature or excessive burden on the industry, he said.
