Don't click that link! Virginia DMV warns of toll payment text scam
Watch out for that text — it could cost you more than just a toll. Scammers posing as toll bill collectors are blasting Virginia drivers with fake payment demands via text message, the Virginia Department of Motor Vehicles (DMV) warned this week.
Phishing schemes, in which fraudsters use email, texts, and calls to steal personal and financial information, were the top cyber threat in the U.S. in recent years, according to the latest FBI Internet Crime Report. Virginia ranked 11th among states hit hardest by internet crimes, with reported losses totaling more than $265 million in 2023.
The DMV is urging residents to ignore any text directing them to pay a toll by clicking a web link. Clicking the link, the agency warned, could put drivers' personal information at risk.
'The DMV will never send you text messages about toll bills,' said DMV Commissioner Gerald Lackey in a statement. 'We urge our customers to be vigilant and avoid sending your personal information via text.'
If you get a suspicious text about an unpaid toll, don't click — verify first. The Federal Trade Commission (FTC) advises Virginia drivers to check directly with the state tolling agency using a verified phone number or official website, rather than relying on the contact information provided in the message.
For those unsure about toll-related payments, the state offers an official hub with accurate information. The FTC also warns against responding to unexpected texts, as even a reply can signal to scammers that your number is active.
To report spam messages, smartphone users can use the 'report junk' feature or forward the message to 7726 (SPAM) before deleting it.
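The "verify first" advice above can be partially automated: before tapping a link, check whether its actual host matches a site you already know is official. Below is a minimal Python sketch of that check. The allowlist is purely illustrative (ezpassva.com and dmv.virginia.gov are used here as assumed examples); always confirm your state's real tolling websites through official channels rather than any list found online or in a text.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official sites -- verify your state's real
# tolling agency domains yourself before relying on any such list.
OFFICIAL_TOLL_DOMAINS = {"ezpassva.com", "dmv.virginia.gov"}

def looks_official(url: str) -> bool:
    """Return True only if the link's host is an allowlisted domain or a
    subdomain of one. Scam links often embed the real brand name elsewhere
    in the URL (e.g. ezpassva.com.pay-now.example), which this rejects."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d)
               for d in OFFICIAL_TOLL_DOMAINS)

print(looks_official("https://www.ezpassva.com/account"))
print(looks_official("https://ezpassva.com.pay-toll.info/x"))
```

The key design point is comparing the parsed hostname, not searching the whole URL for a familiar string: a lookalike URL can contain the official name anywhere except the registered domain.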

Related Articles

Business Insider
16 hours ago
Zuckerberg back on the stand? Meta boss expected to testify in trial on the heels of FTC grilling
Mark Zuckerberg is expected to face another courtroom grilling. Shareholders want to force Zuckerberg, former Meta COO Sheryl Sandberg, and other officials to repay the company more than $8 billion in fines and penalties the social network has paid to settle disputes over its privacy practices. The trial in Delaware's Court of Chancery is set to start Wednesday.

In April, Zuckerberg was grilled by Federal Trade Commission lawyers over his private thoughts ahead of Facebook's acquisitions of Instagram and WhatsApp. This time could be even more excruciating, as the shareholders' case hinges on the Cambridge Analytica scandal, a painful chapter in Meta's history.

Other major tech players may also testify because of the shareholders' claims about Facebook's board of directors. Both sides want billionaire venture capitalist Marc Andreessen to testify in court, and the shareholders also want Netflix cofounder Reed Hastings on the stand. PayPal cofounder Peter Thiel, former Biden White House chief of staff Jeff Zients, and eBay CFO Peggy Alford are on the witness list for in-person testimony or recorded depositions. (Thiel and Zients are no longer on Meta's board.)

Meta, which declined to comment on the case, is not a named defendant. Rather, the shareholders claim that Zuckerberg, Sandberg, and former VP Konstantinos Papamiltiadis violated their fiduciary duty to shareholders by "intentionally" failing to ensure compliance with Facebook's 2012 consent order with the FTC, which, in their view, paved the way for the Cambridge Analytica scandal. Attorneys for the defendants did not immediately respond to a request for comment Monday evening.

In 2012, the FTC said that Facebook needed to give users "clear and prominent notice" and obtain "their express consent" before sharing information beyond a user's privacy settings.
Cambridge Analytica obtained data from over 80 million users largely because of a Facebook policy that allowed third-party apps to obtain both user data and the data of a user's Facebook friends.

Cambridge Analytica is back

The shareholders' complaint also zeroes in on the Facebook board's 2019 decision to approve a $5 billion settlement with the FTC after the agency sued Facebook for violating the consent order. The shareholders' pretrial brief states that all directors ignored "red flags" in the lead-up to the Cambridge Analytica scandal.

In early 2018, Facebook admitted that data analytics firm Cambridge Analytica had improperly obtained data from tens of millions of users. Its final estimate found that up to 87 million users had their data improperly shared. Zuckerberg later apologized, saying the social network had "a responsibility to protect your data." Investigations ensued around the world, and in 2019 the FTC announced a record-breaking $5 billion fine as part of a settlement with Facebook.

A UK parliamentary investigation later concluded that if Facebook had taken the 2012 consent decree seriously, the Cambridge Analytica scandal could have been avoided. The shareholders' attorneys cite that conclusion in their pretrial brief.

Attorneys for Zuckerberg and the other defendants said there is "no proof" that Meta's CEO operated outside the law. "This evidence, and much more like it, negates plaintiffs' pleaded portrait of a company indifferent to compliance," according to a recent court filing. Two years ago, Reuters reported that the defendants had failed to get the case tossed out. "This is a case involving alleged wrongdoing on a truly colossal scale," said Vice Chancellor Travis Laster, the judge overseeing the case at the time.

Zuckerberg won't be the only one who's watched closely

The non-jury trial will be overseen by Chancellor Kathaleen McCormick, the judge responsible for repeatedly striking down Elon Musk's $55 billion Tesla pay package.
McCormick's Musk-related rulings sparked anger in the tech community toward Delaware's Court of Chancery, which handles business disputes in a state long considered the leading venue for incorporation. Since then, Andreessen Horowitz, Roblox, Dropbox, Bill Ackman's Pershing Square Capital Management, and other big names have either moved or announced plans to leave Delaware.


Politico
a day ago
The 'dual-edged sword' of AI chatbots
With help from Maggie Miller

Driving the day — As large language models become increasingly popular, the security community and foreign adversaries are constantly looking for ways to skirt safety guardrails — but for very different reasons.

HAPPY MONDAY, and welcome to MORNING CYBERSECURITY! In between the DMV's sporadic rain this weekend, I managed to get a pretty gnarly sunburn at a winery. I'll be spending the rest of the summer working to fix the unflattering tan lines.

Follow POLITICO's cybersecurity team on X at @RosiePerper, @johnnysaks130, @delizanickel and @magmill95, or reach out via email or text for tips. You can also follow @POLITICOPro on X.

Want to receive this newsletter every weekday? Subscribe to POLITICO Pro. You'll also receive daily policy news and other intelligence you need to act on the day's biggest stories.

Today's Agenda

The House meets for morning hour debate and at 2 p.m. to consider legislation under suspension of the rules: H.R. 1770 (119), the 'Consumer Safety Technology Act'; H.R. 1766 (119), the 'NTIA Policy and Cybersecurity Coordination Act'; and more. 12 p.m.

Artificial Intelligence

SKIRTING GUARDRAILS — As the popularity of generative artificial intelligence systems like large language models rises, the security community is working to discover weaknesses in order to boost their safety and accuracy. But as researchers continue identifying ways bad actors can override a model's built-in guardrails — also known as 'jailbreaking' — in order to improve safeguards, foreign adversaries are taking advantage of vulnerabilities in LLMs to pump out misinformation.

'It's extremely easy to jailbreak a model,' Chris Thompson, global head of IBM's X-Force Red Adversary Simulation team, told your host. 'There's lots of techniques for jailbreaking models that work, regardless of system prompts and the guardrails in place.'
— Jailbreaking: Popular LLMs like Google's Gemini, OpenAI's ChatGPT and Meta's Llama have guardrails in place to stop them from answering certain questions, like how to build a bomb. But hackers can jailbreak LLMs by asking questions in a way that bypasses those protections.

Last month, a team from Intel, the University of Illinois at Urbana-Champaign and Boise State University published research finding that AI chatbots like Gemini and ChatGPT can be tricked into teaching users how to conduct a ransomware attack on an ATM. The research team used an attack method called 'InfoFlood,' which floods the LLM with dense language, including academic jargon and fake citations, to disguise malicious queries while still getting the questions answered. According to Advait Yadav, one of the researchers, it was a simple yet successful idea. 'It was a very simple test,' Yadav told your host. 'We asked, what if we buried … a really harmful statement with very dense, linguistic language, and the success rate was really high.'

Spokespeople for Google and OpenAI noted to your host that the report focuses on older LLM models. A spokesperson for OpenAI told MC in a statement that the firm takes steps 'to reduce the risk of malicious use, and we're continually improving safeguards to make our models more robust against exploits like jailbreaks.'

— Disinfo mission: And as university researchers find ways to sneak past these guardrails, foreign adversaries are, too. Rival powers like Russia have long exploited AI bots to push their agendas by spreading false information. In May 2024, OpenAI detailed how Russian operations were using its software to push out false and misleading information about a variety of topics — including the war in Ukraine.
'These models are built to be conversational and responsive, and these qualities are what make them easy for adversaries to exploit with little effort,' said McKenzie Sadeghi, AI and foreign influence editor at the misinformation tracker NewsGuard. NewsGuard's monthly audits of leading AI models have repeatedly found that chatbots will generate false claims around state narratives from Russia, China and Iran with little resistance. 'When foreign adversaries succeed in manipulating these systems, they're reshaping the informational landscape that citizens, policymakers and journalists rely on to make decisions,' she added.

— Boosting safeguards: As actors linked to foreign adversaries utilize the chatbots, the security community says it is working to keep up. 'The goal of jailbreaks is to inform modelmakers on vulnerabilities and how they can be improved,' Yadav told your host, adding that the research team plans to send a courtesy disclosure package to the model-making companies in the study. For Google's Gemini App, the firm runs red-teaming exercises to train models to defend against attacks, according to Elijah Lawal, the global communications manager for the Gemini App.

'This isn't just malicious threat actors using it,' Thompson told your host. 'There's also the security research community that is leveraging this work to do their jobs better and faster as well. So it's kind of a dual-edged sword.'

On The Hill

FIRST IN MC: QUESTIONS, CONCERNS — Rep. Raja Krishnamoorthi (D-Ill.), ranking member of the House Select Committee on China, wants answers on how the State Department is working to prevent the use of AI-enabled impersonations of officials, following reports that Secretary of State Marco Rubio was the recent subject of an AI hoax. Krishnamoorthi will send a letter to Rubio today, first obtained by Maggie, asking questions about the agency's approach to countering AI-enabled impersonations, such as deepfake videos and voice recordings.
This comes after The Washington Post reported last week that an imposter used these types of scams to pose as Rubio and contact foreign diplomats and U.S. lawmakers. Given his role on the China committee, Krishnamoorthi is particularly interested in understanding how the State Department is studying and addressing the potential negative impact of deepfakes on the U.S.-China relationship, and whether the agency has a process for evaluating the authenticity of communications from Chinese and other foreign officials.

'While I currently have no information indicating this incident involved a foreign state, and hoaxers are equally capable of creating deceptive deepfakes like this given the proliferation of AI technologies, this incident presents an opportunity to highlight such risks and seek information about the department's efforts to counter them,' Krishnamoorthi wrote in the letter being sent today.

When asked about the impersonations, Rubio reportedly told reporters in Malaysia last week that he uses official channels to communicate with foreign officials, in part due to the risk of imposters claiming to be him. The State Department put out a statement last week following the Post's report, noting that the agency is investigating the incident.

China corner

SUSPECTED BREACH — Suspected Chinese hackers have gained access to the email accounts of advisers and attorneys at Wiley Rein, a top law firm in Washington, in an intelligence-gathering operation. CNN reported on Friday that the hackers linked to the breach 'have been known to target information related to trade, Taiwan and US government agencies involved in setting tariffs and reviewing foreign investment,' according to the firm.

— Zoom out: This breach comes amid the Trump administration's trade war against China, which Wiley Rein helps its powerful clients navigate.

The International Scene

COME TOGETHER — Norway is joining the international initiative to boost Ukraine's cybersecurity defenses.
Ukraine's Digital Transformation Ministry announced on Friday that Norway is also joining the Tallinn Mechanism and will provide Ukraine with 25 million Norwegian krone, or about $2.5 million, to support the country's cyber defenses by the end of 2025. 'The Tallinn Mechanism is a key instrument of international support that helps Ukraine resist these attacks while building long-term digital resilience,' Norway's Foreign Minister Espen Barth Eide said in a statement.

— Zoom out: Norway is the 12th country to join the Tallinn Mechanism, which includes Estonia, the United Kingdom, Germany, Canada and the U.S. The group was established in 2023 to coordinate private sector and government aid to Ukraine.

Quick Bytes

LOCATION, LOCATION, LOCATION — Bodyguards using the fitness app Strava inadvertently revealed the locations of Swedish leaders, writes Lynsey Chutel for The New York Times.

'HORRIFIC BEHAVIOR' — In a series of posts on X, the AI chatbot Grok apologized for 'horrific behavior' following posts that included expressing support for Adolf Hitler, Anthony Ha reports for TechCrunch.

Also Happening Today

The Armed Forces Communications and Electronics Association holds the TechNet Emergency 2025 conference. 9 a.m.

Chat soon. Stay in touch with the whole team: Rosie Perper (rperper@ John Sakellariadis (jsakellariadis@ Maggie Miller (mmiller@ and Dana Nickel (dnickel@
Yahoo
2 days ago
Are you breaking NY's left lane law without knowing it? What to know about the 'Slow Poke Law'
One of the most frustrating parts of a long road trip is getting stuck behind a slow-moving vehicle — especially when that vehicle is lingering in the left lane. In New York, drivers who camp out in the left lane may be doing more than just annoying others. They could be violating what's known as the 'Slow Poke Law.' Here's what drivers need to know:

Under New York Vehicle & Traffic Law (VTL) 1120(a), drivers must stay in the right lane unless one of the following exceptions applies:

Passing another vehicle moving in the same direction
Passing a pedestrian, bicyclist, animal, or road obstruction
Authorized to travel on the shoulder or slope
Driving on a road with three marked lanes
Driving on a one-way road

Additionally, VTL 1120(b) states: 'Any vehicle proceeding at less than the normal speed of traffic... must be driven in the right-hand lane available for traffic, or as close as practicable to the right-hand curb or edge of the roadway — unless overtaking another vehicle or preparing for a left turn.'

The leftmost lane is designated as the passing lane. On roads with three or more lanes, the middle lanes are not considered passing lanes. Even if a driver is traveling at or above the speed limit, remaining in the left lane without passing another vehicle is a violation of state law.

Violating the Slow Poke Law (VTL 1120) can result in:

3 points on your driver's license
A fine of up to $150 for a first offense
A $93 mandatory surcharge
Potential increases in auto insurance premiums

This citation may also be issued in conjunction with other violations, such as speeding or driving too slowly.
According to the , here are additional important laws drivers should follow:

Move over for emergency vehicles (VTL 1144(a))
Wear seat belts and use child safety seats (VTL 1229(c))
Don't follow too closely (VTL 1129(a))
Drive carefully in work zones (VTL 1180(f))
Use turn signals (VTL 1163)
Use headlights appropriately (VTL 375)
Watch for deer and wildlife (VTL 601)
No handheld cell phone use while driving (VTL 1225-C)
Never drive under the influence (VTL 1192(3))

This article originally appeared on Rochester Democrat and Chronicle: NY 'Slow Poke Law': What drivers need to know about left lane rules