Google makes new proposal to stave off EU antitrust fine, document shows
The U.S. tech giant has been under pressure after being hit in March with European Union antitrust charges of unfairly favouring its own services such as Google Shopping, Google Hotels and Google Flights over competitors.
The company, owned by Alphabet, will meet its rivals and the European Commission to discuss its proposals during a July 7-8 workshop in Brussels, the document said.
The EU's landmark Digital Markets Act, under which Google has been charged, sets out a list of dos and don'ts for Big Tech aimed at curbing their power and giving rivals more room to compete and consumers more choice.
Last week, Google offered to create a box at the top of the search page for a so-called vertical search service (VSS) which would contain links to specialised search engines as well as to hotels, airlines, restaurants and transport services.
The latest offer, called Option B, is an alternative to last week's proposal, according to a Google document sent by the Commission to involved parties and seen by Reuters.
"Under 'Option B', whenever a VSS box is shown, Google will also show a box that includes free links to suppliers," the document said.
The box for suppliers, in essence hotels, restaurants, airlines and travel services, would be below the VSS box, with Google organising the information about the suppliers.
Option B "provides suppliers opportunities while not creating a box that can be characterised as a Google VSS", the document said.
"We've made hundreds of alterations to our products as part of our DMA compliance," a Google spokesperson said.
"While we strive for compliance, we remain genuinely concerned about some of the real world consequences of the DMA, which are leading to worse online products and experiences for Europeans."
Google risks a fine of as much as 10% of its global annual revenue if it is found in breach of the DMA.
