Latest news with #GPAI


Euractiv
3 hours ago
- Business
- Euractiv
Commission publishes GenAI transparency tool days before rules kick in
On Thursday, the Commission published templates for AI companies to summarise the data they used to train their models – days before transparency rules for generative AI tools start to apply.

The AI Act's rules for General Purpose AI models (GPAIs) – which cover tools such as OpenAI's ChatGPT, MidJourney or Mistral's Le Chat – come into force on 2 August, imposing legally binding transparency obligations on AI developers. The training data summaries that developers will produce by filling in the templates are a key component of the law's push for transparency, as they require GPAI makers to publicly disclose how their models are made – specifying which data went into building their systems.

The Commission's AI training data template has been eagerly awaited by the creative industries, which hope the transparency tool will help them enforce copyright claims against GPAIs. Under the template released today, AI providers will have to disclose the main datasets they used to train their models. They also need to provide a narrative description of data scraped from the internet and of any other data sources. A Commission description of the template said the tool aims to strike a balance between disclosure detailed enough to ensure effective transparency and allowing GPAI makers to protect commercially sensitive information.

Ahead of the entry into force of the AI Act's rules for GPAIs on 2 August, the Commission had been expected to publish several documents to support compliance. The template was the last item on the Commission's to-do list, after guidelines and a GPAI Code of Practice were published earlier this month.

In recent weeks – with time running out before the legal deadline kicks in for GPAIs – industry had been pushing for the Commission to delay implementation. However, the Commission made it clear, multiple times, that the 2 August date stands.

While the GPAI rules become applicable next week, the AI Office, which is the body in charge of enforcing the law, will not begin enforcement until August 2026 – giving AI companies one more year before they could be fined for any breaches. Models that are already on the market have until August 2027 to abide by the rules. (nl)


Euronews
a day ago
- Business
- Euronews
Meta won't sign EU's AI Code, but who will?
A week before the new rules on general purpose artificial intelligence (GPAI) enter into force – affecting tools such as ChatGPT and Gemini – a clearer picture is emerging of where companies stand on signing up to the EU's voluntary Code of Practice on GPAI. US Big Tech giant Meta said last week that it will not sign, having slammed the rules for stifling innovation.

The Code, which the European Commission released last week, is a voluntary set of rules touching on transparency, copyright, and safety and security issues, aiming to help providers of GPAI models comply with the AI Act. Providers who sign up are expected to be compliant with the AI Act and can anticipate more legal certainty, while others will face more inspections. Here's who's in and who's out.

Those that will sign

US AI provider Anthropic, which developed the AI assistant Claude as a competitor to OpenAI's ChatGPT and Google's Gemini, is the latest company to say it intends to sign the Code. 'We believe the Code advances the principles of transparency, safety and accountability—values that have long been championed by Anthropic for frontier AI development,' the company said in a statement. 'If thoughtfully implemented, the EU AI Act and Code will enable Europe to harness the most significant technology of our time to power innovation and competitiveness,' the statement added.

OpenAI said earlier last week that it will sign up too, claiming that Europe should now 'use this moment to empower [its] innovators to innovate and builders to build for Europe's future.'

The drafting process of the Code, which began last September after the Commission selected a group of experts, was heavily criticised, mainly by rightsholders who feared violations of copyright law would increase, while US tech giants claimed the rules stifle innovation.

Microsoft President Brad Smith told Reuters last week that his company will likely sign too. Smith said earlier this year that Microsoft wants to be 'a voice of reason' as geopolitical tensions rise.

Those that will not sign

US tech giant Meta was the first, and so far remains the only, company to say it will not sign the Code. Chief Global Affairs Officer Joel Kaplan said in a statement last Friday that 'Europe is heading down the wrong path on AI.' After 'carefully reviewing' the Code, Meta will not sign because the document 'introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act,' Kaplan said.

Gry Hasselbalch, a scholar working on data and AI ethics and a contributor to the EU's AI ethics guidelines, told Euronews that the Code does not bring real change to how companies can implement general purpose AI in the EU. 'The companies, like Meta, that decide not to sign the code will still need to comply with the AI Act. Signing the code is therefore just a formality. They would still have to read it and follow it to understand when an AI system is considered a general purpose AI system and what transparency, copyright and security means in the AI Act,' Hasselbalch said.

She added that the AI Act itself – rules that regulate AI systems and tools according to the risk they pose to society – 'has become a token in a geo-political battle.' 'The law was developed in a carefully designed and performed democratic process to create legal certainty for AI developers and adopters in the EU. In fact, most AI systems can be developed and used subject to existing legislation without additional legal obligations of the AI Act,' she said.

Meta will still need to comply with the AI Act's obligations that start applying on 2 August. Other Big Tech companies, including Amazon and Google, did not yet want to comment on whether they will sign. Providers that already have a GPAI model on the market will have to sign before 1 August; others can sign up at a later time, the Commission said. On that same day, the EU executive will publish a list of signatories. The Code requires approval by EU member states, which are represented in a subgroup of the AI Board, as well as by the Commission's own AI Office.


Time of India
6 days ago
- Business
- Time of India
Facebook-parent Meta refuses to sign EU's AI Code of Practice, here's why
Meta Platforms, the parent company of Facebook, Instagram and WhatsApp, has officially refused to sign the European Union's newly released AI Code of Practice, a voluntary framework designed to enable companies to comply with the bloc's AI Act. Meta's Chief Global Affairs Officer Joel Kaplan announced the decision in a LinkedIn post. Meta's stance is rooted in concerns that the current framework of the EU's AI code could stifle innovation.

Meta declines to sign the European Union's AI Code of Practice

In his LinkedIn post, Kaplan said that Europe might be 'heading down the wrong path'. He said that the code introduces 'legal uncertainties for model developers' and also imposes requirements that go 'far beyond the scope of the AI Act'.

'Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act,' Kaplan wrote.

'Businesses and policymakers across Europe have spoken out against this regulation. Earlier this month, 44 of Europe's largest businesses – including Bosch, Siemens, SAP, Airbus and BNP – signed a letter calling for the Commission to 'Stop the Clock' in its implementation. We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them,' Kaplan added.

What the European Union's AI Code of Practice requires

The EU's AI Code of Practice requires regular documentation updates for AI tools and bans training AI on pirated content. It also requires compliance with content owners' opt-out requests, systemic risk assessments and post-market monitoring. The Act comes into effect on August 2 and sets strict rules for general-purpose AI models such as Meta's Llama, OpenAI's ChatGPT, and Google's Gemini. While the code is voluntary, signing it offers companies legal clarity and reduced regulatory scrutiny.


TechCrunch
6 days ago
- Business
- TechCrunch
Meta refuses to sign EU's AI code of practice
Meta has refused to sign the European Union's code of practice for its AI Act, weeks before the bloc's rules for providers of general-purpose AI models take effect.

'Europe is heading down the wrong path on AI,' wrote Meta's chief global affairs officer Joel Kaplan in a post on LinkedIn. 'We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.'

The EU's code of practice — a voluntary framework published earlier this month — aims to help companies implement processes and systems to comply with the bloc's legislation for regulating AI. Among other things, the code requires companies to provide and regularly update documentation about their AI tools and services, bans developers from training AI on pirated content, and requires them to comply with content owners' requests not to use their works in their data sets.

Calling the EU's implementation of the legislation 'over-reach,' Kaplan claimed that the law will 'throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them.'

A risk-based regulation for applications of artificial intelligence, the AI Act bans some 'unacceptable risk' use cases outright, such as cognitive behavioral manipulation or social scoring. The rules also define a set of 'high-risk' uses, such as biometrics and facial recognition, and uses in domains like education and employment. The act also requires developers to register AI systems and meet risk and quality management obligations.

Tech companies from across the world, including those at the forefront of the AI race like Alphabet, Meta, Microsoft and Mistral AI, have been fighting the rules, even urging the European Commission to delay the rollout. But the Commission held firm, saying it will not change its timeline.

Also on Friday, the EU published guidelines for providers of AI models ahead of rules that will go into effect on August 2. These rules will affect providers of 'general-purpose AI models with systemic risk,' like OpenAI, Anthropic, Google, and Meta. Companies that have such models on the market before August 2 will have to comply with the legislation by that date.


Euronews
6 days ago
- Business
- Euronews
Meta rebuffs EU's AI Code of Practice
US social media company Meta will not sign the EU's AI Code of Practice on General Purpose AI (GPAI), the company's Chief Global Affairs Officer Joel Kaplan said in a statement on Friday.

'Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission's Code of Practice for GPAI models and Meta won't be signing it,' he said, adding that the Code 'introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.'

The Commission last week released the Code, a voluntary set of rules touching on transparency, copyright, and safety and security issues, aiming to help providers of AI models such as ChatGPT and Gemini comply with the AI Act. Companies that sign up are expected to be compliant with the Act and can anticipate more legal certainty, while others will face more inspections.

The AI Act's provisions affecting GPAI systems enter into force on 2 August. It will take another two years before the AI Act, which regulates AI systems according to the risk they pose to society, becomes fully applicable. OpenAI, the maker of ChatGPT, has said it will sign up to the Code once it's ready.

Criticism from tech giants

The drafting process of the Code was criticised by Big Tech companies as well as CEOs of European companies, who claimed they need more time to comply with the rules. 'We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them,' Kaplan said.

The Code requires sign-off by EU member states, which are represented in a subgroup of the AI Board, as well as by the Commission's own AI Office. The member states are expected to give a green light as early as 22 July. The EU executive said it will publish the list of signatories on 1 August. On Friday the Commission published further guidance to help companies comply with the GPAI rules.