
Latest news with #GPAI

After the trade deal..., Newsletter

Euronews

2 days ago

Key diary dates

Tuesday 29 July: Meeting of the EU Council's trade policy committee.
Tuesday 29-Wednesday 30 July: Meetings of the EU Council's ad hoc working group on the EU's long-term budget, the Multiannual Financial Framework.
Saturday 2 August: EU rules on general purpose artificial intelligence enter into force.

In spotlight

As the EU institutions pause activity for the summer, the EU Council continues to see some behind-the-scenes activity this week on two key issues: US trade and the EU's long-term budget. If early reactions to the deal struck by Commission President Ursula von der Leyen and US President Donald Trump are anything to go by, there are likely to be some barbed responses when its trade policy committee meets on Tuesday. Differing approaches among EU member states to the negotiations were arguably one of the key weaknesses of the EU negotiating team. How could it threaten use of the anti-coercion instrument, for example, when France and Germany remained at odds over whether to use it for much of the negotiation? But if that was a weakness during the negotiations, it could also bedevil von der Leyen's team now that the ink is dry on an agreement. As the detailed readout of the deal becomes clearer, the Commission will be eager to ensure that sniping from member states doesn't put effective implementation at risk.

Earlier this month the European Commission began the painstaking process of wrangling over the EU's next seven-year budget, the Multiannual Financial Framework, proposing nearly €2tn for 'new and emerging challenges' between 2028 and 2034. That will also occupy the Council this week, and the usual fierce debate can be expected behind the scenes between the EU's 'frugal' members and the net spending countries. Usually the points of difference surround the numbers apportioned to the different budget pots, but this time the Commission's proposal adds another dimension to what is usually considered one of the most intractable negotiations: a wholesale change to the methodology. Where agricultural subsidies were formerly transferred to the poorer regions and administered locally, under the proposal these would be delivered through national plans proposed by member states, reflecting the way the post-pandemic recovery fund was distributed. That change at least might be well received across the member states, since it boosts the powers of governments over their regions.

The Policy Briefing will take a pause for the summer and return on 25 August.

Policy newsmakers

AI tech giant coders? Ahead of the new rules on general purpose artificial intelligence (GPAI) entering into force – affecting tools such as ChatGPT and Gemini – a clear picture has emerged on where companies stand when it comes to signing up to the EU's voluntary Code of Practice on GPAI. US Big Tech giant Meta has slammed the rules for stifling innovation, with Chief Global Affairs Officer Joel Kaplan saying that 'Europe is heading down the wrong path on AI.' Meanwhile, Microsoft President Brad Smith has said his company will likely sign. Smith said earlier this year that Microsoft wants to be 'a voice of reason' as geopolitical tensions rise.

As the AI Act falls short on protecting copyright, creatives eye licensing reform

Euractiv

5 days ago

Copyright holders are turning their attention towards a potential licensing framework, after being left disappointed by the AI Act's transparency obligations and Code of Practice for general-purpose AIs (GPAIs). On Thursday the Commission released its much-anticipated template for AI developers to summarise the data used to train their models. However, the measure has been met with scepticism from creators, who believe that transparency alone is not enough to safeguard their copyrights.

By nature, GPAI models – the technology that underpins the likes of ChatGPT, Le Chat or Midjourney – require vast amounts of data to produce outputs that are more accurate and less likely to be biased. But the provenance of that data is often opaque, and creatives and artists worry that they have no way of knowing whether their work was used to train AI systems – and therefore no way to object to its use.

Transparency as a first step

The EU's landmark Artificial Intelligence Act is intended to bring greater transparency to AI systems and enable creators to assert their copyright claims. The law requires developers of GPAI models to produce summaries of the data used for training, and to implement systems allowing rights holders to "opt out" of having their content used for model development. On Thursday, the Commission published the template it expects AI developers to use for these summaries of training data. This aspect of the AI Act faced intense lobbying from copyright holders, who wanted the templates to be as granular as possible, and from AI companies, who worried too much detail could reveal trade secrets.

But the creative industry is not convinced by the Commission's implementation. "It falls short of safeguarding the creative sector and, if not corrected, risks undermining Europe's AI Act and copyright framework in favour of a few global tech companies," Burak Özgen, deputy general manager of GESAC, the authors and composers' lobby, told Euractiv.

A Code of Practice for GPAIs was also negotiated between experts, lobbyists and civil society over several months to further flesh out compliance around key issues such as copyright. But for rights holders, the final text also fell short. The improvements in the final version of the Code are "certainly insufficient", said Özgen, arguing that it lacks the "concrete" detail which would make it "actionable". His blunt summary is that the Code does "nothing useful to help exercising and enforcing the rights of authors".

Copyright rules fit for GenAI

At the core of the disagreement between rights holders and AI developers is how copyright rules apply to generative AI tools – and what should come next. The EU's Copyright Directive allows the use of software that crawls the internet, on lawfully accessed websites and databases, to collect copyrighted text and images for data analytics or research – aka text and data mining (TDM) – unless copyright holders have actively opted out of having their work scraped. For the tech lobby CCIA, the TDM exception is essential to support AI innovation. The rules were "carefully designed to strike a vital balance between fostering innovation and protecting intellectual property," CCIA Europe's Senior Policy Manager, Boniface de Champris, told Euractiv. However, a study commissioned by the Parliament's legal affairs committee takes a very different view, finding that the TDM rules were not intended for generative AI and "do not provide legal certainty, transparency, or effective rights control", as the report puts it.

The two camps remain divided over this issue, which may be why rights holders are now turning their attention towards a potential dedicated framework for licensing agreements. Finding a way to be appropriately compensated for the use of their works to train AI remains a central concern for them, especially as GenAI uptake accelerates. A draft Parliament report led by MEP Axel Voss – who also negotiated the EU's existing Copyright Directive – makes the case for a new framework to enable licensing deals to be concluded. The report must still be amended by other MEPs before it can represent the Parliament's official position. But the composer and songwriters' lobby ECSA was happy that Voss' draft report "rejects the application of the TDM exceptions to Generative AI, and calls to ensure fair remuneration," as Helienne Lindvall, its president, told Euractiv.

The ball is now in the Commission's court. It must decide whether to respond to the rise of GenAI by revising the EU's copyright framework – with the Copyright Directive up for review in 2026 – and, if so, figure out how to ensure that any new rules strike the right balance between supporting AI innovation and protecting human creativity. (nl, aw)

Commission publishes GenAI transparency tool days before rules kick in

Euractiv

6 days ago

On Thursday, the Commission published templates for AI companies to summarise the data they used to train their models – days before transparency rules for generative AI tools start to apply.

The AI Act's rules for general-purpose AI models (GPAIs) – such as OpenAI's ChatGPT, Midjourney or Mistral's Le Chat – come into force on 2 August, imposing legally binding transparency obligations on AI developers. The training data summaries produced when AI developers fill in the templates are a key component of the law's push for transparency, as they require GPAI makers to publicly disclose how their models are made – specifying which data went into building their systems.

The Commission's AI training data template has been eagerly awaited by the creative industries, which hope the transparency tool will help them enforce copyright claims against GPAIs. Under the template released on Thursday, AI providers will have to disclose the main datasets they used to train models. They will also need to provide a narrative description of data scraped from the internet and of any other data sources. A Commission description of the template said the tool aims to strike a balance between enabling disclosure detailed enough to ensure effective transparency and allowing GPAI makers to protect commercially sensitive information.

Ahead of the entry into force of the AI Act's rules for GPAIs on 2 August, the Commission had been expected to publish several documents to support compliance. The template was the last item on the Commission's to-do list, after guidelines and a GPAI Code of Practice were published earlier this month. In recent weeks – with time running out before the legal deadline kicks in for GPAIs – industry had been pushing for the Commission to delay implementation. However, the Commission made it clear, multiple times, that the 2 August date stands.

While the GPAI rules become applicable next week, the AI Office – the body in charge of enforcing the law – will not begin enforcement until August 2026, giving AI companies one more year before they can be fined for any breaches. Models that are already on the market have until August 2027 to comply with the rules. (nl)

Meta won't sign EU's AI Code, but who will?

Euronews

7 days ago

A week before the new rules on general purpose artificial intelligence (GPAI) enter into force – affecting tools such as ChatGPT and Gemini – a clearer picture is emerging on where companies stand when it comes to signing up to the EU's voluntary Code of Practice on GPAI. US Big Tech giant Meta said last week that it will not sign, having slammed the rules for stifling innovation.

The Code, which the European Commission released last week, is a voluntary set of rules covering transparency, copyright, and safety and security issues, and aims to help providers of GPAI models comply with the AI Act. Providers who sign up are expected to be compliant with the AI Act and can anticipate more legal certainty; others will face more inspections. Here's who's in and who's out.

Those that will sign

US AI provider Anthropic, which developed the AI assistant Claude as a competitor to OpenAI's ChatGPT and Google's Gemini, is the latest company to say it intends to sign the Code. 'We believe the Code advances the principles of transparency, safety and accountability—values that have long been championed by Anthropic for frontier AI development,' the company said in a statement. 'If thoughtfully implemented, the EU AI Act and Code will enable Europe to harness the most significant technology of our time to power innovation and competitiveness,' the statement added.

OpenAI said earlier last week that it will sign up too, claiming that Europe should now 'use this moment to empower [its] innovators to innovate and builders to build for Europe's future.'

The drafting process of the Code, which began last September after the Commission selected a group of experts, was heavily criticised, mainly by rights holders who feared violations of copyright law would increase, while US tech giants claimed the rules stifle innovation.

Microsoft President Brad Smith told Reuters last week that his company will likely sign too. Smith said earlier this year that Microsoft wants to be 'a voice of reason' as geopolitical tensions rise.

Those that will not sign

US tech giant Meta was the first, and so far remains the only, company to say it will not sign the Code. Chief Global Affairs Officer Joel Kaplan said in a statement last Friday that 'Europe is heading down the wrong path on AI.' After 'carefully reviewing' the Code, Meta will not sign because the document 'introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act,' Kaplan said.

Gry Hasselbalch, a scholar working on data and AI ethics and a contributor to the EU's AI ethics guidelines, told Euronews that the Code does not bring real change to how companies can implement general purpose AI in the EU. 'The companies, like Meta, that decide not to sign the code will still need to comply with the AI Act. Signing the code is therefore just a formality. They would still have to read it and follow it to understand when an AI system is considered a general purpose AI system and what transparency, copyright and security means in the AI Act,' Hasselbalch said. She added that the AI Act itself – rules that regulate AI systems and tools according to the risk they pose to society – 'has become a token in a geo-political battle.' 'The law was developed in a carefully designed and performed democratic process to create legal certainty for AI developers and adopters in the EU. In fact, most AI systems can be developed and used subject to existing legislation without additional legal obligations of the AI Act,' she said.

Meta will still need to comply with the AI Act's obligations, which start applying on 2 August. Other Big Tech companies, including Amazon and Google, have so far declined to say whether they will sign. Providers that already have a GPAI model on the market will have to sign before 1 August; others can sign up later, the Commission said. On that same day, the EU executive will publish a list of signatories. The Code still requires approval by EU member states, which are represented in a subgroup of the AI Board, as well as by the Commission's own AI Office.

Facebook-parent Meta refuses to sign EU's AI Code of Practice, here's why

Time of India

18-07-2025

Meta Platforms, the parent company of Facebook, Instagram and WhatsApp, has officially refused to sign the European Union's newly released AI Code of Practice, a voluntary framework designed to help companies comply with the bloc's AI Act. Meta's Chief Global Affairs Officer Joel Kaplan announced the decision in a LinkedIn post. Meta's stance is rooted in concerns that the current framework of the EU's AI Code could stifle innovation.

Meta declines to sign the European Union's AI Code of Practice

Kaplan said in the post that Europe is 'heading down the wrong path'. He said that the Code introduces 'legal uncertainties for model developers' and imposes requirements that go 'far beyond the scope of the AI Act'.

'Europe is heading down the wrong path on AI. We have carefully reviewed the European Commission's Code of Practice for general-purpose AI (GPAI) models and Meta won't be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act,' wrote Kaplan in the LinkedIn post.

'Businesses and policymakers across Europe have spoken out against this regulation. Earlier this month, 44 of Europe's largest businesses – including Bosch, Siemens, SAP, Airbus and BNP – signed a letter calling for the Commission to 'Stop the Clock' in its implementation. We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them,' added Kaplan.

What the European Union's AI Code of Practice requires

The EU's AI Code of Practice requires regular documentation updates for AI tools and bans training AI on pirated content. It also requires compliance with content owners' opt-out requests, as well as systemic risk assessments and post-market monitoring. The rules come into effect on 2 August and set strict requirements for general-purpose AI models such as Meta's Llama, OpenAI's ChatGPT and Google's Gemini. While the Code is voluntary, signing it offers companies legal clarity and reduced regulatory scrutiny.
