Latest news with #AIAct

Bangkok Post
15 hours ago
- Business
- Bangkok Post
An intelligent approach to AI governance
Thailand has drafted principles for artificial intelligence (AI) legislation as it seeks to establish an AI ecosystem and widen adoption. The Electronic Transactions Development Agency (ETDA) recently completed an online public hearing on the draft and plans to submit it for cabinet consideration by the end of July.

How was Thailand's AI regulatory framework developed?

Sak Segkhoonthod, senior advisor at ETDA, said enforcement of AI rules thus far has been based on soft laws or guidelines. An AI law is needed to help Thailand deal efficiently with the impacts of the evolving technology, he said. Since 2022, Thailand has studied global models, especially the EU's AI Act, and introduced two draft laws: one focused on regulating AI-enabled business services, the other on promoting AI innovation. These two drafts will be combined to form the basis of the AI law. Both adopted a risk-based framework, classifying AI systems into prohibited, high-risk and general-use categories.

ETDA took the lead in promoting AI governance rules, proposing four tiers. The first tier recommends Thailand work with other countries to enhance its global position in AI governance; the country also embraces Unesco's principles to advance AI governance in line with international ethical standards. The second tier involves sectoral regulators overseeing policies in their respective areas. The third tier focuses on corporate implementation, where organisations adopt practical tools, guidelines and frameworks. ETDA has already launched the AI Governance Guidelines for Executives and the Generative AI Governance Guideline for Organisations. "We have plans to release up to 50 guidelines or tools or checklists, including on AI procurement, job redesign and AI readiness assessment to assist organisations in their AI transformation," said Mr Sak. The fourth tier promotes AI literacy at the individual level.

What are the benefits of the AI law?

He said the legislation aims to protect users from potential AI risks, establish governance rules and remove legal barriers that existing laws cannot address, unlocking broader AI adoption. For example, the Transport Ministry's current regulations do not support the deployment of autonomous vehicles, as they were not designed to address unmanned systems. "The new AI law will support innovations," said Mr Sak. Having a dedicated AI law will help Thailand efficiently remove regulatory hurdles, he said, and relevant agencies can quickly develop their own organic AI laws based on the main law. The new law should also support tech entrepreneurs in testing AI in a controlled setting or regulatory sandbox, as well as in real-world conditions, he said. The draft should permit the use of previously collected personal data, originally gathered for other purposes, in the development or testing of AI systems intended for public benefit, conducted under strict conditions, said Mr Sak.

"By providing legal clarity and confidence, the law will encourage broader AI adoption across sectors through a combination of contextualised use of AI, sector-specific oversight and a common governance framework, ensuring consistency and minimising regulatory conflicts between different domains," he said. For example, using AI to monitor student behaviour may raise ethical concerns and be inappropriate in some contexts, whereas applying AI to monitor driver behaviour is essential to ensure passenger safety, said Mr Sak.

What are the principles in the draft?
The principles focus on supervising AI risks. Legal recognition should be granted to actions and outcomes produced by AI, and such recognition should not be denied solely because no human directly intervened, unless a specific provision allows it to be denied, he said. As AI is a human-controlled tool, all actions and outcomes derived from AI must remain attributable to humans, said Mr Sak. Individuals may be legally exempt from acts or contracts generated by AI where the party responsible for the AI could not have reasonably foreseen the AI's behaviour, and the other party was aware -- or should reasonably have been aware -- that such actions were unforeseeable to the responsible party, according to the draft.

He said the law will not define a list of prohibited or high-risk AI applications, instead empowering sectoral regulators to define these lists based on their domain expertise. The draft proposes that providers of AI services are bound by a duty of care to adopt risk management rules based on global guidelines and best practices. Overseas-based companies that provide AI services in Thailand will be required to appoint legal representatives in the country. Law enforcement agencies can order AI service providers to stop providing services, and users to stop using AI, according to the draft. Companies that use AI to generate content are expected to label it or adopt relevant methods to inform consumers.

Which authority oversees AI law enforcement?

ETDA's AI Governance Center (AIGC) is expected to coordinate with related parties on law enforcement. The existing regulators in all sectors will define and enforce rules for high-risk AI in their domains, according to the draft. Under the AI law, two key committees will be established: a regulator committee responsible for issuing practical frameworks and setting policies in coordination with the sectoral regulators, and an expertise committee tasked with monitoring and evaluating emerging AI risks to ensure timely and informed regulatory responses.

What do companies think of the draft?

Mr Sak said that as of June 20, 80 organisations, including Google and Microsoft, had submitted feedback during the recent public hearing. The majority praised the draft for striking a balance between prohibiting harmful uses and promoting innovation. However, some feedback raised concerns about whether sectoral regulators will be ready to supervise AI efficiently. In addition, the issue of AI sovereignty was highlighted, including the risk that foreign generative AI models may provide incomplete or inaccurate responses to users on matters related to Thailand, due to limited local data representation. "We are considering the development of common benchmarking guidelines for privately owned large language models in the Thai language," he said.

Ratanaphon Wongnapachant, chief executive of CLOUD, welcomed the AI legislation, calling it a timely step to prevent misuse and enforce responsible AI practices, particularly in sensitive sectors. Pochara Arayakarnkul, chief executive of Bluebik Group, expressed concern over the definition of AI in the upcoming legislation. He said if the definition is too broad, it could have far-reaching implications; conversely, a narrow definition may fail to cover emerging risks. AI governance must go beyond a single risk dimension, as each industry adopts AI in fundamentally different ways, with varying degrees of risk depending on how mature the technology is, said Mr Pochara.
"The implications span multiple dimensions, from transparency and accountability to operational reliability," he said. Touchapon Kraisingkorn, head of AI Labs at Amity Group, proposed establishing objective, easy-to-understand criteria for defining high-risk and prohibited AI, using metrics such as the number of users, impact on fundamental rights or the monetary value of potential damages. "This would promote uniform interpretation across the private sector and reduce the discretionary burden on regulators," he said. Mr Touchapon also proposed a tiered compliance framework for small and medium-sized enterprises based on their size, as determined by revenue and employee count. He said this mechanism should be independent of a company's age, allowing startups the space to innovate before taking on the full scope of regulatory responsibilities as they mature. Moreover, a formal certification programme for "AI auditors" should be developed, complemented by the promotion of open-source tools for model clarity and risk assessment to ensure both industry and government have the necessary talent and tools to comply with new standards, said Mr Touchapon. "We strongly recommend an 'AI incident portal', which is a public, anonymised repository of AI system failures and rights violations that would be an invaluable resource, enabling all parties to learn and adapt quickly. This fosters a necessary culture of transparency and trust in AI systems," he said. For labelling or watermarking AI-generated content, Mr Touchapon recommended a phased approach, starting with a voluntary programme to assess its effectiveness before mandating a general requirement. This strategy allows for a timely response to deepfakes and misinformation without placing a premature or excessive burden on the industry, he said.


Euronews
a day ago
- Business
- Euronews
EU Commission to call on companies to sign AI Code
The European Commission will next week stage a workshop in an effort to convince companies to sign the Code of Practice on general-purpose AI (GPAI) before the rules on GPAI enter into force on 2 August, according to a document seen by Euronews. The Code of Practice on GPAI, a voluntary set of rules, aims to help providers of AI models, such as ChatGPT and Gemini, comply with the EU's AI Act. The final version of the Code was set to come out in early May but has been delayed. The workshop, organised by the Commission's AI Office, will discuss the final code of practice as well as the 'benefits of signing the Code', according to the internal document.

In September 2024, the Commission appointed thirteen experts to draft the rules, using plenary sessions and workshops to gather feedback. The process has been criticised throughout by tech giants as well as publishers and rights-holders concerned that the rules violate the EU's copyright laws. The US government's Mission to the EU sent a letter to the EU executive in April pushing back against the Code, claiming that it stifles innovation. In addition, Meta's global policy chief, Joel Kaplan, said in February that the company would not sign the Code because it took issue with the then-latest version. An EU official told Euronews in May that US companies 'are very proactive' and there was a sense that 'they are pulling back because of a change in the administration', following the trade tensions between the US and EU. Euronews reported last month that US tech giants Amazon, IBM, Google, Meta, Microsoft and OpenAI have called upon the EU executive to keep its Code 'as simple as possible' to avoid redundant reporting and unnecessary administrative burdens.

A spokesperson for the European Commission previously said the Code will appear before early August, when the rules on GPAI tools enter into force. The Commission will assess companies' intentions to sign the Code and carry out an adequacy assessment with the member states. The EU executive can then decide to formalise the Code through an implementing act. The AI Act – which regulates AI tools according to the risks they pose to society – entered into force gradually last year; however, some provisions will only apply in 2027.


Euronews
2 days ago
- Business
- Euronews
Conflicted consultants influencing EU's AI Code
Consultancies hired by the European Commission to support drafting a voluntary set of rules on general-purpose AI (GPAI) have conflicts of interest, according to a complaint to be filed with the European Ombudsman on Wednesday by the non-profit campaign groups Corporate Europe Observatory (CEO) and LobbyControl. The Code of Practice on GPAI aims to help providers of AI models, such as ChatGPT and Gemini, comply with the EU's AI Act. The Commission in September appointed thirteen experts to draft the Code, using plenary sessions and workshops to gather feedback. In addition, the Commission's AI Office looked for an external pool of expertise to support the drafting process, and awarded the contract to the French consultancy Wavestone, the Italian consultancy Intellera and the Brussels-based think tank Centre for European Policy Studies (CEPS).

CEO and LobbyControl claim that both Wavestone and Intellera (part of the Accenture Group) have a direct commercial interest in the development of digital policies and specifically in rules on GPAI. Wavestone said in 2023 that it would work with Microsoft to deploy generative AI, specifically Copilot, in French companies; in 2024, when the drafting process for the Code began, it received a 'Microsoft Partner of the Year' award for its work. Intellera's partner company Accenture generates substantial revenue from selling generative AI services to companies.

In its tender specifications, the Commission said that 'involved entities must not be subject to conflicting interests which may negatively affect the contract performance'. 'The EU's rules on conflicting interests are clear. If a consultancy has a vested commercial interest, the Commission should reject the contract,' according to the complaint. In a case from 2020, the EU Ombudsman warned about the awarding of policy-related contracts to companies and consultancies with a vested interest in the market they are advising on.

Structural advantages

The two NGOs published a report last month claiming that Big Tech companies 'enjoyed structural advantages' in the drafting process of the Code and 'weakened the rules around advanced AI'. Their research suggested that tech companies had more access to the drafting process than others, a claim the Commission later denied. The final version of the Code was set to come out in early May but has been delayed. Previous drafts have been criticised by rights-holders and publishers claiming a conflict with copyright laws, and by tech companies for being too restrictive. The EU executive said the Code will appear before 2 August, when the rules on GPAI tools enter into force. The AI Act will be fully in force in 2027.


Techday NZ
2 days ago
- Business
- Techday NZ
Milestone & Genoa launch EU-compliant AI for smart cities
Milestone has commenced work on Project Hafnia in Europe, collaborating with the city of Genoa, Italy, to develop AI-driven solutions for traffic management and urban infrastructure using NVIDIA technology. The project's primary objective is to use artificial intelligence to enhance city operations by leveraging regulation-compliant video data, ensuring alignment with European legal frameworks, including GDPR and the EU's AI Act.

Project Hafnia, after its launch in the United States, will provide high-quality video data that have been processed using NVIDIA NeMo Curator on the NVIDIA DGX Cloud platform. Milestone is adopting the NVIDIA Omniverse Blueprint for Smart City AI, a reference framework designed to optimise city operations through digital twins and AI agents. In addition, Milestone is expanding its proprietary data platform using NVIDIA Cosmos. This approach enables the generation of synthetic video data based on real-world inputs, combining real and synthetic datasets to build and train vision language models (VLMs) responsibly. The company has engaged Nebius, a European-based cloud provider, to supply the GPU compute required for training these models. This partnership is intended to ensure that all data processing and storage remain fully compliant with European data protection regulations, while supporting digital sovereignty objectives and keeping sensitive public sector data strictly within EU jurisdiction.

Urban AI applications

Project Hafnia seeks to harness the potential of VLMs, which are AI models capable of mapping relationships between visual data—such as images or videos—and corresponding text. This enables the models to generate summaries and insights from visual sources, which can be applied across multiple domains including transportation, safety and security within city environments. Emphasising the importance of regulatory compliance and ethical data sourcing, the project aims to support cities throughout Europe in building and refining computer vision and AI applications that align with the region's standards for privacy, transparency and fairness.

"I'm proud that with Project Hafnia we are introducing the world's first platform to meet the EU's regulatory standards, powered by NVIDIA technology. With Nebius as our European cloud provider, we can now enable compliant, high-quality video data for training vision AI models — fully anchored in Europe. This marks an important step forward in supporting the EU's commitment to transparency, fairness, and regulatory oversight in AI and technology — the foundation for responsible AI innovation," says Thomas Jensen, CEO of Milestone.

The company states that the compliant and ethically sourced data library enabled by Project Hafnia provides the necessary foundation for developing advanced video analytics models and vision language models. The models are configured for optimal performance on NVIDIA GPUs and are compatible with NVIDIA AI Blueprint frameworks focused on video search and summarisation (VSS).

Application in Genoa

The first practical implementation from Project Hafnia is a European Visual Language Model purpose-built for transportation management. This VLM is developed using transportation data sourced directly from Genoa, Italy, ensuring that only compliant and responsibly gathered data are used.

"AI is achieving extraordinary results, unthinkable until recently, and the research in the area is in constant development. We enthusiastically joined forces with Project Hafnia to allow developers to access fundamental video data for training new Vision AI models. This data-driven approach is a key principle in the Three-Year Plan for Information Technology, aiming to promote digital transformation in Italy and particularly within the Italian Public Administration," says Andrea Sinisi, Information Systems Officer, City of Genoa.

The framework developed through Project Hafnia is designed for scalability, allowing it to extend across multiple domains and accommodate future technological developments. The resulting compliant data set and the fine-tuned VLM will be made available to participating cities under a controlled-access licence model, facilitating broader AI adoption across Europe whilst upholding ethical standards.

Nebius as cloud partner

Nebius will provide the cloud infrastructure underpinning Project Hafnia in Genoa, ensuring that all processing power and data handling are carried out within the jurisdiction of the EU. This guarantees adherence to European data handling regulations and digital sovereignty imperatives.

"Project Hafnia is exactly the kind of real-world, AI-at-scale challenge Nebius was built for," says Roman Chernin, Chief Business Officer of Nebius. "Supporting AI development today requires infrastructure engineered for high-throughput, high-resilience workloads, with precise control over where data lives and how it's handled. From our EU-based data centres to our deep integration with NVIDIA's AI stack, we've built a platform that meets the highest standards for performance, privacy and transparency."

Milestone's approach with Project Hafnia positions it as an early adopter within the sector of European AI development, focusing on regulatory-compliant, ethically sourced and technologically advanced infrastructure solutions for urban environments. Through partnerships with city administrations such as Genoa and technology providers including NVIDIA and Nebius, Milestone aims to facilitate responsible deployment of AI for urban improvement initiatives across Europe.
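To ground the VLM concept the article describes, here is a minimal, generic sketch of the image-to-text mapping such models perform, using the openly available BLIP captioning model from the Hugging Face transformers library. This is purely illustrative: it is not Project Hafnia's transportation VLM (which is not publicly released), and the image URL is a placeholder assumption.

```python
# Generic illustration of what a vision language model (VLM) does:
# map an image to descriptive text. Uses the open-source BLIP captioning
# model, not Project Hafnia's transportation VLM.
# Requires: pip install torch transformers pillow requests
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# Any traffic-scene image would do; this URL is a placeholder assumption.
url = "https://example.com/traffic_camera_frame.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# The model generates a short textual summary of the visual scene: the same
# image-to-text mapping that lets a transportation-tuned VLM surface
# insights about traffic flow or incidents from camera footage.
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```

A transportation-specific VLM like the one being built from Genoa's data would be trained or fine-tuned on curated, compliant traffic footage, as the article describes, but the underlying mapping from pixels to text is the same.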