logo
Warning to all 1.8bn Gmail users over ‘hidden danger' that steals password without you noticing – what to watch out for

Warning to all 1.8bn Gmail users over ‘hidden danger' that steals password without you noticing – what to watch out for

The Irish Sun17-07-2025
AN URGENT warning has been issued for over a billion Gmail users amid a "hidden danger" which is stealing passwords - and this is what you need to watch out for.
The new type of attack has been flying under the radar, attacking an eye-watering 1.8 billion Gmail users without them even noticing.
2
Malicious actors are targeting 1.8 billion Gmail users through an email scam
Credit: Getty
Users therefore need to make sure they follow the correct instructions in order to combat the malicious activity.
Thieving hackers are using Google
Gemini
- the company's AI built-in tool - to trick users into giving over their
Cybersecurity experts have found that
These tricks users into
READ MORE TECH NEWS
The
Shady
GenAI bounty manager Marco Figueroa demonstrated how such a dangerous prompt could falsely alert users that their email account has been compromised.
These warnings would urge victims to call a fake "Google support" phone number provided, in order to resolve the issue.
Most read in Tech
To fight these prompt injection attacks, experts have made a number of recommendations that users should act on immediately.
They firstly suggested that companies
Google adds AI upgrade to your Gmail that writes emails for you – find it in seconds if you're eligible for freebie
This should help counter hackers sending invisible text within emails.
Security experts also recommended that users implement post-processing filters to scan inboxes for suspicious elements like "urgent messages", URLs, or phone numbers.
This action could bolster defences against threats.
The scam was brought to light after research, spearheaded by Mozilla's 0Din security team, showed proof of one of the hostile attacks last week.
The report showed how hackers tricked Gemini into showing a fake security alert.
It warned users their password had been stolen - but the message was fake and designed to steal their info.
The trick works by hiding a secret size zero font prompt in white text that matches the email background.
So when someone clicks "summarise this email" using Gemini, the tool reads the hidden message - not just the visible bit.
This form of manipulation is named "indirect prompt injection", and it takes advantage of AI's inability to differentiate between a user's question and a hacker's embedded message.
AI cannot tell the difference, as both messages look like text, and it will usually follow whichever comes first - even if it is malicious.
As Google have failed to patch this method of scamming victims, the door is still open for hackers to exploit this technique.
Sneaking in commands that the AI may follow will be an effective method of leaking sensitive data until users are properly protected against the threat.
AI is also incorporated into Google Docs, Calendar, and outside apps - widening the scope of the potential risk.
Google has reminded users amid this scamming crisis that it does not issue security alerts through Gemini summaries.
So if a summary tells you that your password is at risk, or prompts you with a link to click - users should always treat it as suspicious and delete the email.
2
Users need to follow the steps to protect against the scam
Credit: Alamy
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Fears of Amazon price hike and MORE ads as company boss teases worrying change for shoppers
Fears of Amazon price hike and MORE ads as company boss teases worrying change for shoppers

The Irish Sun

time7 hours ago

  • The Irish Sun

Fears of Amazon price hike and MORE ads as company boss teases worrying change for shoppers

AMAZON has sparked concern that more ads could pop up on Alexa devices as it introduces new AI features to the popular voice assistant. The company's new AI supercharged Alexa is still rolling out, which users can pay for if they don't have a Prime membership. Advertisement 2 Alexa is the world's most famous digital voice assistant Credit: Getty 2 Alexa+ is still rolling out in the US with no update on when it'll land in the UK Credit: Getty Alexa+ is optional and costs $19.99 (£15) per month without a Prime subscription. But there could be even more ads on the way as Amazon looks to make its devices profitable. Currently, users already see some limited ads, such as on Echo Show displays. There's also the "By the way" feature that makes subtle suggestions to you based off what you ask Alexa. Advertisement Amazon chief Andy Jassy has now hinted other ads are in the works. "There will be opportunities, as people are engaging in more multiturn conversations [with Alexa Plus], to have advertising play a role to help people find discovery, and also as a lever to drive revenue," he said during a recent investor call. TechCrunch has speculated that this could mean paying extra for an ad-free tier of Alexa. But users have raged over the idea and mocked possible scenarios. Advertisement "When you tell it to turn on your lights, it'll turn on the lights and then give a 15 second advertisement for other lights you could buy on one joked on Reddit. "All I want it to do is provide random facts when I ask about them, play the music I ask it to and adjust the household settings I tell it to," another person commented. "I don't want suggestions and I sure as s*** won't be listening to ads." A third added: "They are a glorified Bluetooth speaker. Advertisement "I will throw mine in the trash before I suffer through a single ad." And a fourth said: "I already stopped watching prime because of the ads. I'm fine dropping Alexa too." There's still no word on when Alexa+ will be rolled out to other countries including the UK.

Is AI making our brains lazy?
Is AI making our brains lazy?

RTÉ News​

time11 hours ago

  • RTÉ News​

Is AI making our brains lazy?

Have you ever googled something but couldn't remember the answer a short time later? Well, you're probably not alone. Multiple studies have shown that "digital amnesia" or the "Google effect" is a consequence of having information readily available at our fingertips. It happens because we don't commit the information to memory in the knowledge that we can easily look it up again. According to The Decision Lab, this bias exists not only for things we look up on internet search engines but for most information that is easily accessible on our computers or phones. For example, most of us can't remember friends, family members or work colleagues' phone numbers by heart. So how can we help our brains to remember information? "Writing is thinking," said Professor Barry O'Sullivan from the School of Computer Science & IT at UCC. Professor O'Sullivan believes there are learning benefits associated with Large Language Models (LLMs), but his view is that we should be applying a precautionary principle to them as its such early days. "If you're not the one doing the writing then you're not the one doing the thinking, so if you're a student or you're the employee, the writing really does need to be yours," he said. LLMs with chatbots or virtual assistant chatbots such as Gemini or ChatGPT, have been trained on enormous amounts of text and data and their systems are capable of understanding and generating human language. With these rapid advances in AI certain tasks are now easier, and when used effectively can save time and money, in our personal and working lives. However, there are concerns about critical thinking, creativity and problem solving. Some AI companies claim their models are capable of genuine reasoning, but there's ongoing debate over whether their "chain of thought" or "reasoning" is trustworthy. According to Professor Barry O'Sullivan, these claims "just aren't true." "These large language models don't reason the same way as human beings," he said. "They have pretty weak abilities to reason mathematically, to reason logically, so it really is the big stepping stone, but it's always been the big stepping stone in AI." he added. He cautions people and workers to use these tools as an assistant, and as a sometimes "unreliable assistant". Professor O'Sullivan also warns AI generated answers could contain a biased view of the world, that is when human brain power is needed to apply sense, reasoning and logic to the data. The Google Search Engine was launched in 1998 and is considered a narrow form of AI. Since then, there have been many studies highlighting how the "Google Effect" is a real phenomenon and its impact on how we remember and learn. Launches such as Open AI's chatbot ChatGPT, and more recently Google's AI Overviews and AI Mode search are all relatively new, meaning there has been less time to study the effects. A new study by researchers at the media lab at Massachusetts institute of Technology (MIT) "Your Brain on ChatGPT", divided 54 people aged 18 to 39 into three groups to write essays. One group used Open AI's ChatGPT, one used Google's Search Engine, and the remaining group used no tools; they had to rely purely on brain power. While it was a very small study and in a very specific location (Boston), it found that of the three groups, ChatGPT users had the lowest brain engagement. While in the brain-only group, researchers found they were more engaged and curious, and claimed ownership and expressed higher satisfaction with their essays. The study suggests the use of LLMs, such as ChatGPT which is what they used but is similar to other LLMs, could actually harm learning, especially for younger users. "The MIT study is one source, but there isn't any definitive evidence of that, however the growing amount of evidence does seem to be tipping on that side of the argument that the more we rely on very sophisticated reasoning systems that can automate the process of writing, the less we are thinking," said Professor O'Sullivan. "People should remember that writing is thinking, so when you when you give up the writing to somebody else, you're not thinking anymore and that does have a consequence." Is digital dependence shaping our brains? Whether in education or in the workplace, the use of AI is becoming increasingly prevalent. In a nutshell it does appear to be shaping our brains, but the debate continues over whether it's happening in a negative or a positive way. As a professor in UCC, Mr O'Sullivan ponders over whether table quizzes are as popular as they used to be, and how young people view need-to-know information. "You often hear students saying, "Well I don't really need to know that because if I was out on the job I'd Google it, and wouldn't that be just fine?" "It is to some extent fine for some pieces of information, but it's also important to know why the information is that way, or what the origin is, or why things are that way," Professor O'Sullivan said. There is a skill shift happening with how we and our brains engage with new technology. This is why human judgement and decision making is more important than ever according to Claire Cogan, behavioural scientist and founder of Behaviour Wise. "There is an effect (from AI) on how the brain learns, so there's an impact on brain health. Some of that is relevant to employers, and its very relevant to individuals," said Ms Cogan. AI is useful in the workplace when it can automate mundane or time-consuming tasks such as generating content and extracting key points from large amounts of data. Ms Cogan noted the theory is when people talk about the pluses and minuses, AI should free up time to allow people to do other things. "So as long as that balance is there, it's a good thing. If that balance isn't there, then it's simply going to have a negative impact," she stated. Referring to the MIT study, she assessed it found evidence that using AI can slow attention and have an impact on the working memory. "The brain will go for shortcuts, if there's an easier way to get to something, that's the way the brain will choose," said Ms Cogan. "However, there are still areas where human intelligence far outweighs anything AI can do, particularly around judgment and decision making, and they're going to become more and more important," she stated. "That's the side driven by people, so there's a whole new skill where people are going to have to learn how to judge when to use AI, how to use it, how to direct it, and how to manage it," she said. Does reliance on AI impact on critical thinking in the workplace? Since the late 90s people have been using search engines to find facts. With the advances and sophistication of AI people are becoming more wary, with real concerns about misinformation, disinformation and deep fakes. So while we are relying on AI tools to help find information, it's more important than ever that we engage core human skills in terms of decision making. Ms Cogan believes in an ideal world teachers and lecturers would be almost preparing people for what is going to happen in five or more years. "It's a particular skill to know when and when not to use AI, just teaching the value in decision making because ultimately the overall goal or the aim is defined by the person. There is a skill to making a good decision, and in a work context to know how to arrive at the best decision, that in itself is a whole topic in behavioural science," she said. What's next for our brains? "For our own sake, we need to nurture our brains and we need to look after our brain health," said Ms Cogan. "The best way we can do that is by remaining actively involved in that whole learning process," she said. The authors of the MIT study urged media to avoid certain vocabulary when talking about the paper and impact of generative AI on the brain. These terms included "brain scans", "LLMs make you stop thinking", "impact negatively", "brain damage" and "terrifying findings". It is very early days when it comes to learning about how these technological advances will impact on us in the long term. Maybe in the near future, AI will be able to summarise and analyse data from upcoming studies to tell us if it is rewiring our brains or making them lazy.

As AI improves, the temptation to use it grows, and our ability to think shrinks
As AI improves, the temptation to use it grows, and our ability to think shrinks

Irish Times

time12 hours ago

  • Irish Times

As AI improves, the temptation to use it grows, and our ability to think shrinks

In the revelatory closing pages of his 2024 novel Playground, Richard Powers has one of his narrators reflect on the arrival into the world of ChatGPT . We are at an indeterminate point in the future, and the technology is unnamed. But we recognise it from the familiar story that its 'overnight appearance rocked the world and divided humanity'. 'Some people saw glimmers of real understanding. Others saw only a pathetic pattern-completer committing all kinds of silly errors even a child wouldn't make.' It turns out, however, that this version of what we've learned to call a Large Language Model (LLM) was only the beginning. READ MORE Later improvements multiply the technology's power beyond imagination. Playground describes these developments with a tone of inevitability, as a question of when, rather than if. In a profile of him in the Spanish newspaper El País, I was struck by a single blunt statement by Powers: ' AI represents the complete victory of capital over labour'. Here was a sentiment – registered in a time-honoured Marxist vocabulary – that seemed absent from Playground, or at least significantly muted within it. The technology is mostly seen in the novel through the sympathetically rendered consciousness of a Big Tech magnate. Another strand considers AI's implications for the inhabitants of a small Pacific island. But its broader effects on the lives of workers or the market for jobs remain relatively unexplored. [ Tech shocks to industry have only just begun Opens in new window ] Richard Powers, author of 2024 novel Playground: 'AI represents the complete victory of capital over labour" This division between the prophetic speculations of Playground and the hard-nosed political analysis of its author is a version of the 'divided humanity' to which the novel alludes. Just as there are those who see 'glimmers of real understanding' in LLMs and are excited by the potential for untold technological breakthrough, there are others who view this 'pathetic pattern-completer' as robbing humans of the kind of cognitive labour that, on some accounts, is the core of who we are. Somewhere in the middle are the many – perhaps the overwhelming majority – for whom ChatGPT is just one more technology to adapt to, to use where one can, with the larger consequences of its widespread adoption lying somewhere out of sight. These thoughts have practical application to my day job as a lecturer in English (can you tell?) at University College Dublin. As in universities the world over, the last two years have seen an unprecedented challenge by LLM technology to our assessment procedures, which have until now centred on the analytical essay. Over recent decades, for a variety of good reasons, we have moved away from in-person exams towards more varied forms of assessment, but the take-home essay has remained at the heart of our pedagogy. All this now looks set to change. Unless you know a student well, marking an essay has become a battle between blind faith and cynicism. One of the most insidious effects of LLMs is that even good student work comes under suspicion as potentially the product of a machine. As the technology improves, the temptation to use it to game the system grows. This represents – to adopt Marxist language again – a form of alienation: the language we use is no longer our own. [ We need to talk about AI's staggering ecological impact Opens in new window ] And this is before we even reckon with the vast amounts of electricity and fresh water needed to power this technology, along with the exploited workers in the Global South that, as Karen Hao reports in Empire of AI, are required to improve its outcomes and clean up its language. Powers is right that AI represents the complete victory of capital over labour. That victory has been so fast and so discombobulating that it is hard to know on what ground to stand, never mind how to fight back. To consider AI part of a ruling class project is not to imagine it as a conspiracy, but simply to acknowledge that its effects align with the aims of the powerful. For all the talk about boosting productivity, authoritative studies have very quickly demonstrated that LLM use makes people less willing and able to think. The most widely reported study of this kind, conducted by the Media Lab at MIT, found lower levels of brain engagement among participants asked to use ChatGPT to write essays, by comparison with those writing them without such support. Other studies, this time in political science rather than neuropsychology, demonstrate that voting Republican in US elections is strongly correlated with not having a university degree. Since taking power, the Trump administration has launched an unprecedented war on universities, attacking not only humanities and social science programmes but also the previously untouchable architecture of STEM research. Top academics are leaving the US in droves. There is talk in Ireland and elsewhere of benefiting from this brain drain. [ 'Really scary territory': AI's increasing role in undermining democracy Opens in new window ] Top academics are leaving the US in droves. Photograph: Tierney L. Cross/ The New York Times At the same time, Trump and his cronies are pumping billions into AI research. This is not a coincidence. LLMs are the most effective tool ever created to curtail the traditional work of universities, the cultivation of critical individual minds. In these circumstances even those with university degrees, the pattern suggests, will become more likely to vote for Trump and his ilk. This in turn will enable further legislative abominations like the 'big beautiful bill,' which deprives the poorest of healthcare while lining the pockets of the already astronomically rich. In universities, we worry not only about student writing but about student reading. Anecdotal evidence suggests that students are becoming less willing and able to read the kinds of long novels that used to be the core of an English degree. This is not only a question of technology but of political economy, with many students now having to work part-time or even full time to support their studies. My colleagues and I have responded by gradually reducing the number of set texts on our courses. The sense of inevitability is hard to gainsay. I would love to have my students read and debate Playground, a novel that speaks to its moment like few artworks do. Is it too credulous about AI? Too cynical? Does it capture the world as they see it? Does it transform their perception of that world? But setting a multilayered and challenging text for students to read carries newfound risks. Asking them to write about such a text carries even more. Whirring away in the background, powered by looming data centres, ChatGPT stands ready to do a serviceable job in our absence. Dr Adam Kelly is associate professor in the School of English, Drama and Film at University College Dublin

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store