
'Quishing' scams dupe millions of Americans as cybercriminals turn the QR code bad
"As with many technological advances that start with good intentions, QR codes have increasingly become targets for malicious use. Because they are everywhere — from gas pumps and yard signs to television commercials — they're simultaneously useful and dangerous," said Dustin Brewer, senior director of proactive cybersecurity services at BlueVoyant.
Brewer says attackers exploit these seemingly harmless symbols to trick people into visiting malicious websites or unknowingly sharing private information, a scam that has become known as "quishing."
The increasing prevalence of QR code scams prompted a warning from the Federal Trade Commission earlier this year about unwanted or unexpected packages showing up with a QR code that, when scanned, "could take you to a phishing website that steals your personal information, like credit card numbers or usernames and passwords. It could also download malware onto your phone and give hackers access to your device."
State and local advisories this summer have reached across the U.S., with the New York Department of Transportation and Hawaii Electric warning customers about QR code scams.
The appeal to cybercriminals lies in the relative ease with which the scam operates: slap a fake QR code sticker on a parking meter or a utility bill payment warning and rely on urgency to do the rest.
"The crooks are relying on you being in a hurry and you needing to do something," said Gaurav Sharma, a professor in the department of electrical and computer engineering at the University of Rochester.
Sharma expects QR scams to increase as the use of QR codes spreads. Another reason QR codes have grown in popularity with scammers is that more safeguards have been put in place to tamp down on traditional email phishing campaigns. A study this year from cybersecurity platform KeepNet Labs found that 26 percent of all malicious links are now sent via QR code. According to cybersecurity company NordVPN, 73 percent of Americans scan QR codes without verification, and more than 26 million have already been directed to malicious sites.
"The cat and mouse game of security will continue and that people will figure out solutions and the crooks will either figure out a way around or look at other places where the grass is greener," Sharma said.
Sharma is working to develop a "smart" QR code called an SDMQR (Self-Authenticating Dual-Modulated QR) that has built-in security to prevent scams. But first, he needs buy-in from Google and Microsoft, the companies that control the smartphone camera and code-scanning infrastructure. Putting company logos into QR codes isn't a fix, he said, because it can create a false sense of security and criminals can usually simply copy the logos.
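The general idea behind a self-authenticating code can be sketched in software. The example below is only a rough illustration, not the actual SDMQR design: a hypothetical issuer signs the URL it publishes, and the scanning app verifies that signature against the issuer's public key before opening the link. It assumes Python with the third-party cryptography package; the issuer key, URL, and payload format are all invented for illustration.

```python
# Conceptual sketch only -- not the actual SDMQR scheme. It illustrates the
# general idea of a self-authenticating QR payload: the issuer signs the URL
# it publishes, and the scanning app verifies the signature with the issuer's
# public key before opening the link. Assumes the third-party `cryptography`
# package; the issuer key, URL, and payload format are invented for illustration.
import base64

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side (e.g., a city parking authority) signs the URL before printing the code.
issuer_key = Ed25519PrivateKey.generate()
url = "https://parking.example.gov/pay"
signature = issuer_key.sign(url.encode())
qr_payload = f"{url}#sig={base64.urlsafe_b64encode(signature).decode()}"

# Scanner side: check the signature before opening anything.
def verify_payload(payload: str, issuer_public_key) -> bool:
    try:
        link, sig_part = payload.rsplit("#sig=", 1)
        issuer_public_key.verify(base64.urlsafe_b64decode(sig_part), link.encode())
        return True   # signature matches: the code came from the key holder
    except (ValueError, InvalidSignature):
        return False  # unsigned, malformed, or tampered code

print(verify_payload(qr_payload, issuer_key.public_key()))                            # True
print(verify_payload(qr_payload.replace("/pay", "/steal"), issuer_key.public_key()))  # False
```

A real scheme would also need a trusted way to distribute issuer public keys to billions of phones, which is part of why Sharma says buy-in from the platform vendors matters.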
Some Americans are wary of the increasing reliance on QR codes.
"I'm in my 60s and don't like using QR codes," said Denise Joyal of Cedar Rapids, Iowa. "I definitely worry about security issues. I really don't like it when one is forced to use a QR code to participate in a promotion with no other way to connect. I don't use them for entertainment-type information."
Institutions are also trying to fortify their QR codes against intrusion.
Natalie Piggush, spokeswoman for the Children's Museum of Indianapolis, which welcomes over one million visitors a year, said its IT staff began upgrading the museum's QR codes a couple of years ago to protect against what has become an increasingly significant threat.
"At the museum, we use stylized QR codes with our logo and colors as opposed to the standard monochrome codes. We also detail what users can expect to see when scanning one of our QR codes, and we regularly inspect our existing QR codes for tampering or for out-of-place codes," Piggush said.
Museums are usually less vulnerable than places like train stations or parking lots because scammers are looking to collect cash from people expecting to pay for something. A patron at a museum is less likely to expect to pay, although Sharma said even in those settings, fake QR codes can be deployed to install malware on someone's phone.
QR code scams are likely to hit both Apple and Android devices, but iPhone users may be slightly more likely to fall victim, according to a study completed earlier this year by Malwarebytes. Users of iPhones expressed more trust in their devices than Android owners did, and that, researchers say, could cause them to let down their guard. For example, 70 percent of iPhone users have scanned a QR code to begin or complete a purchase, versus 63 percent of Android users who have done the same.
Malwarebytes researcher David Ruiz wrote that trust could have an adverse effect, in that iPhone users do not feel the need to change their behavior when making online purchases, and they have less interest in (or may simply not know about) using additional cybersecurity measures, like antivirus. Fifty-five percent of iPhone users trust their device to keep them safe, versus 50 percent of Android users expressing the same sentiment.
A QR code is more dangerous than a traditional phishing email because users typically can't read or verify the web address it encodes. Even when a QR code is accompanied by human-readable text, attackers can alter that text to deceive users into trusting the link and the website it points to. The best defense is to avoid scanning unwanted or unexpected QR codes and to use a scanner that displays the full URL before opening it.
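For readers who want to see what "display the URL before opening it" looks like in practice, here is a minimal sketch, assuming Python with the third-party pyzbar and Pillow packages and a hypothetical allow-list of expected domains; it decodes a QR code image and prints the link for inspection instead of opening it.

```python
# Minimal sketch: decode a QR code image and show its URL for inspection
# instead of opening it automatically. Assumes the third-party `pyzbar` and
# `Pillow` packages; EXPECTED_DOMAINS and the file name are hypothetical.
from urllib.parse import urlparse

from PIL import Image
from pyzbar.pyzbar import decode

EXPECTED_DOMAINS = {"parking.example.gov", "tickets.example-museum.org"}  # hypothetical allow-list

def inspect_qr(image_path: str) -> None:
    for result in decode(Image.open(image_path)):
        payload = result.data.decode("utf-8", errors="replace")
        host = urlparse(payload).hostname or ""
        verdict = "expected domain" if host in EXPECTED_DOMAINS else "unexpected domain, do not open"
        # Print the full decoded link rather than launching a browser.
        print(f"Decoded: {payload}")
        print(f"Host: {host} ({verdict})")

if __name__ == "__main__":
    inspect_qr("scanned_code.png")
```

Most built-in camera apps already show this kind of preview; the point is to read the domain before tapping through.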
Brewer says cybercriminals have also been leveraging QR codes to infiltrate critical networks.
"There are also credible reports that nation-state intelligence agencies have used QR codes to compromise messaging accounts of military personnel, sometimes using software like Signal that is also open to consumers," Brewer said. Nation-state attackers have even used QR codes to distribute remote access trojans (RATs) — a type of malware designed to operate without a device owner's consent or knowledge — enabling hackers to gain full access to targeted devices and networks.
Still, one of the most dangerous aspects of QR codes is how they are part of the fabric of everyday life, a cyberthreat hiding in plain sight.
"What's especially concerning is that legitimate flyers, posters, billboards, or official documents can be easily compromised. Attackers can simply print their own QR code and paste it physically or digitally over a genuine one, making it nearly impossible for the average user to detect the deception," Brewer said.
Rob Lee, chief of research, AI, and emerging threats at the SANS Institute, a cybersecurity training organization, says that QR code compromise is just another tactic in a long line of similar strategies in the cybercriminal playbook.
"QR codes weren't built with security in mind, they were built to make life easier, which also makes them perfect for scammers," Lee said. "We've seen this playbook before with phishing emails; now it just comes with a smiley pixelated square. It's not panic-worthy yet, but it's exactly the kind of low-effort, high-return tactic attackers love to scale."