Hawaiʻi Judiciary warns of text message scam over fake traffic citations

Yahoo | 06-06-2025
HONOLULU (KHON2) — The Hawaiʻi State Judiciary is warning residents about a recent text message scam that falsely claims recipients owe money for traffic citations.
Scammers have been sending fraudulent messages to people across the state. These texts claim to be from the Department of Motor Vehicles (DMV) and threaten to suspend the recipient's driver's license and vehicle registration unless a payment is made.
The texts also mention a 'service fee' and falsely warn that credit scores may be negatively impacted if no payment is received.
The judiciary clarified that neither state courts nor the DMV initiate contact about unpaid traffic citations through text messages, phone calls or email — unless you first reached out directly with a specific inquiry. Instead, official notifications are sent through U.S. mail.

State officials are reminding the public that legitimate court communications will never demand immediate payment through digital platforms or include threats tied to vehicle registration or credit status.
If you receive a suspicious message, do not respond or provide any personal information.
Instead, the judiciary recommends you report the scam to the FBI Internet Crime Complaint Center, the Federal Trade Commission (FTC), the U.S. General Services Administration or CrimeStoppers Honolulu.
For anyone uncertain about whether they may have unpaid citations, the judiciary encourages using eCourt Kokua, the Judiciary's public online case look-up system.
Using that system, drivers can search by full name or license plate number to verify the status of any citations. The 'case search' feature allows users to find detailed information about any pending or resolved traffic matters.
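The warning signs described above (a supposed DMV sender, a payment demand, threats to suspend a license or registration, a 'service fee,' credit-score threats, and an embedded payment link) lend themselves to a simple keyword check. The following Python snippet is a minimal, purely illustrative sketch, not a Judiciary or DMV tool; the keyword patterns and the three-match threshold are assumptions drawn from the red flags listed in this article.

```python
import re

# Red flags drawn from the Judiciary's description of the scam texts: a
# supposed DMV sender, a payment demand, threats to suspend a license or
# registration, a "service fee," and warnings about credit scores.
# The patterns and threshold are illustrative assumptions, not official guidance.
RED_FLAGS = [
    r"\bdmv\b",
    r"\b(unpaid|overdue)\b.*\b(citation|ticket)s?\b",
    r"\bsuspend(ed|sion)?\b",
    r"\bservice fee\b",
    r"\bcredit (score|report)\b",
    r"https?://",                 # official notices arrive by U.S. mail, not by link
    r"\b(pay (now|immediately)|immediate payment)\b",
]

def looks_like_citation_scam(message: str, threshold: int = 3) -> bool:
    """Return True when the text matches several of the scam's hallmarks."""
    text = message.lower()
    hits = sum(1 for pattern in RED_FLAGS if re.search(pattern, text))
    return hits >= threshold

if __name__ == "__main__":
    sample = ("DMV Final Notice: your unpaid citation will lead to license "
              "suspension. Pay the service fee now: http://example.test/pay")
    print(looks_like_citation_scam(sample))  # True: several red flags present
```

A heuristic like this can only flag likely scams; the reliable check remains the one the Judiciary recommends, looking up the citation directly in eCourt Kokua and reporting suspicious messages to the agencies listed above.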
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

Related Articles

Tesla and California's DMV are facing off over the car company's self-driving claims

San Francisco Chronicle | 5 days ago

The fate of Tesla's business in California, at least for the next 30 days, could be decided in a stuffy second-floor hearing room in Oakland. There, attorneys for the electric car company and the Department of Motor Vehicles are facing off this week before an administrative judge over claims that Tesla deceived consumers with its autopilot and self-driving features. Officials at the DMV filed those allegations in July 2022 and amended them in November 2023, seeking to suspend Tesla's licenses to manufacture and sell vehicles in California for at least 30 days. Additionally, the department is pursuing a court order for the electric vehicle maker to pay an undetermined sum in restitution.

In court filings, attorneys for the state Department of Justice have cited four phrases or product descriptions from Tesla's website that state officials describe as misleading or that amount to false advertising. These include: 'autopilot'; 'full self-driving capability'; a promise that the system 'is designed to be able to conduct short and long-distance trips with no action required by the person in the driver's seat'; and claims that cars can effectively drive people to their destinations, with the vehicle navigating streets, freeways and intersections and then automatically parking itself. 'These labels and descriptions represent specifically that respondent (Tesla)'s vehicles will operate as autonomous vehicles, which they could not and cannot do,' Attorney General Rob Bonta wrote in a July 17 brief.

Attorneys for Tesla argue, to the contrary, that while the company's driver assistance technology qualifies as 'state of the art,' the company 'has always made clear' that its vehicles are not fully autonomous and that they require 'active driver supervision' from a human.

As this case proceeds through administrative court in Oakland, Tesla is facing a separate federal trial in Miami that threatens both its autopilot system and its brand image. The Miami case centers on a 2019 fatal crash of a Tesla Model S sedan with its autopilot engaged. According to court documents, the Tesla driver had bent down to pick up a cell phone he had dropped when his car suddenly rammed into a parked SUV, killing one person and seriously injuring another.

New text scam in Colorado pretends to be DMV employee, alleges unpaid tickets

CBS News | 18-07-2025

A new type of scam text message looks like it's coming from the Colorado DMV. But the goal of all scammers is the same: separating you from your money.

Lakewood resident Lauren Perrin almost got hit with it before asking her father to take a look at the message. It saved her from potentially clicking a bad link or sending money somewhere. "I had to ask two people, and the verbiage was very formal," Perrin told CBS Colorado. "It definitely fit the way that it would be sent if someone here said this."

The texts have ended up everywhere, even in our CBS Colorado newsroom, where many of our coworkers have received the scam. The DMV says the texts have started to become more prevalent lately. When a person receives the text, it says they have overdue tickets to be paid, and, if they do not do so soon, there may be more penalties.

"I think my text said I had one day to get all the tickets I never paid," Perrin joked. "But it came from a random number. It wasn't like 1-800. I actually asked my Dad and he said forget about it."

In a statement to CBS Colorado, the DMV suggests not clicking any links in a suspicious text, not sharing any personal information and not replying to the message at all. Those who responded to our question on the CBS Colorado Facebook page said they largely received and deleted the texts.

If you have been affected by a scam text, the DMV suggests changing your passwords, contacting your bank or financial institution, considering a fraud alert and staying generally vigilant.

Perrin now has a trained eye as well as a unique strategy for sussing out scammers going forward. "They're probably using ChatGPT or AI to make these texts, so I would run it through there to see if it was," Perrin said. "And just ask your friends."

The 'dual-edged sword' of AI chatbots

Politico | 14-07-2025

With help from Maggie Miller

Driving the day — As large language models become increasingly popular, the security community and foreign adversaries are constantly looking for ways to skirt safety guardrails — but for very different reasons.

HAPPY MONDAY, and welcome to MORNING CYBERSECURITY! In between the DMV's sporadic rain this weekend, I managed to get a pretty gnarly sunburn at a winery. I'll be spending the rest of the summer working to fix the unflattering tan lines.

Follow POLITICO's cybersecurity team on X at @RosiePerper, @johnnysaks130, @delizanickel and @magmill95, or reach out via email or text for tips. You can also follow @POLITICOPro on X. Want to receive this newsletter every weekday? Subscribe to POLITICO Pro. You'll also receive daily policy news and other intelligence you need to act on the day's biggest stories.

Today's Agenda

The House meets for morning hour debate and at 2 p.m. to consider legislation under suspension of the rules: H.R. 1770 (119), the 'Consumer Safety Technology Act'; H.R. 1766 (119), the 'NTIA Policy and Cybersecurity Coordination Act'; and more. 12 p.m.

Artificial Intelligence

SKIRTING GUARDRAILS — As the popularity of generative artificial intelligence systems like large language models rises, the security community is working to discover weaknesses in order to boost their safety and accuracy. But as research continues identifying ways bad actors can override a model's built-in guardrails — also known as 'jailbreaking' — to improve safeguards, foreign adversaries are taking advantage of vulnerabilities in LLMs to pump out misinformation. 'It's extremely easy to jailbreak a model,' Chris Thompson, global head of IBM's X-Force Red Adversary Simulation team, told your host. 'There's lots of techniques for jailbreaking models that work, regardless of system prompts and the guardrails in place.'

— Jailbreaking: Popular LLMs like Google's Gemini, OpenAI's ChatGPT and Meta's Llama have guardrails in place to stop them from answering certain questions, like how to build a bomb. But hackers can jailbreak LLMs by asking questions in a way that bypasses those protections. Last month, a team from Intel, the University of Illinois at Urbana-Champaign and Boise State University published research that found AI chatbots like Gemini and ChatGPT can be tricked into teaching users how to conduct a ransomware attack on an ATM. The research team used an attack method called 'InfoFlood,' which pumps the LLM with dense language, including academic jargon and fake citations, to disguise the malicious queries while still getting the questions answered. According to Advait Yadav, one of the researchers, it was a simple yet successful idea. 'It was a very simple test,' Yadav told your host. 'We asked, what if we buried … a really harmful statement with very dense, linguistic language, and the success rate was really high.' Spokespeople for Google and OpenAI noted to your host that the report focuses on older LLM models. A spokesperson for OpenAI told MC in a statement that the firm takes steps 'to reduce the risk of malicious use, and we're continually improving safeguards to make our models more robust against exploits like jailbreaks.'

— Disinfo mission: And as university researchers find ways to sneak past these guardrails, foreign adversaries are, too. Rival powers like Russia have long exploited AI bots to push their agenda by spreading false information.
In May 2024, OpenAI detailed how operations from Russia are using its software to push out false and misleading information about a variety of topics — including the war in Ukraine. 'These models are built to be conversational and responsive, and these qualities are what make them easy for adversaries to exploit with little effort,' said McKenzie Sadeghi, AI and foreign influence editor at the misinformation tracker NewsGuard. NewsGuard's monthly audits of leading AI models have repeatedly found that chatbots will generate false claims around state narratives from Russia, China and Iran with little resistance. 'When foreign adversaries succeed in manipulating these systems, they're reshaping the informational landscape that citizens, policymakers and journalists rely on to make decisions,' she added.

— Boosting safeguards: As actors linked to foreign adversaries utilize the chatbots, the security community says it is working to keep up. 'The goal of jailbreaks is to inform modelmakers on vulnerabilities and how they can be improved,' Yadav told your host, adding that the research team plans to send a courtesy disclosure package to the model-making companies in the study. For Google's Gemini App, the firm runs red-teaming exercises to train models to defend against attacks, according to Elijah Lawal, the global communications manager for the Gemini App. 'This isn't just malicious threat actors using it,' Thompson told your host. 'There's also the security research community that is leveraging this work to do their jobs better and faster as well. So it's kind of a dual-edged sword.'

On The Hill

FIRST IN MC: QUESTIONS, CONCERNS — Rep. Raja Krishnamoorthi (D-Ill.), ranking member of the House Select Committee on China, wants answers on how the State Department is working to prevent the use of AI-enabled impersonations of officials, following reports that Secretary of State Marco Rubio was the recent subject of an AI hoax. Krishnamoorthi will send a letter to Rubio today, first obtained by Maggie, asking questions about the agency's approach to countering AI-enabled impersonations, such as deepfake videos and voice recordings. This comes after The Washington Post reported last week that an imposter used these types of scams to pose as Rubio and contact foreign diplomats and U.S. lawmakers.

Given his role on the China Committee, Krishnamoorthi is particularly interested in understanding how the State Department is studying and addressing the potential negative impact of deepfakes on the U.S.-China relationship, and whether the agency has a process for evaluating the authenticity of communications from Chinese and other foreign officials. 'While I currently have no information indicating this incident involved a foreign state, and hoaxers are equally capable of creating deceptive deepfakes like this given the proliferation of AI technologies, this incident presents an opportunity to highlight such risks and seek information about the department's efforts to counter them,' Krishnamoorthi wrote in the letter being sent today.

When asked about the impersonations, Rubio reportedly told reporters in Malaysia last week that he uses official channels to communicate with foreign officials, in part due to the risk of imposters claiming to be him. The State Department put out a statement last week following the Post's report, noting that the agency is investigating the incident.
China corner

SUSPECTED BREACH — Suspected Chinese hackers have gained access to email accounts of advisers and attorneys at Wiley Rein, a top law firm in Washington, in an intelligence-gathering operation. CNN reported on Friday that the hackers linked to the breach 'have been known to target information related to trade, Taiwan and US government agencies involved in setting tariffs and reviewing foreign investment,' according to the firm.

— Zoom out: This breach comes amid the Trump administration's trade war against China, which Wiley Rein helps its powerful clients navigate.

The International Scene

COME TOGETHER — Norway is joining the international initiative to boost Ukraine's cybersecurity defenses. Ukraine's Digital Transformation Ministry announced on Friday that Norway is joining the Tallinn Mechanism and will provide Ukraine with 25 million Norwegian krone, or $2.5 million, to support the country's cyber defenses by the end of 2025. 'The Tallinn Mechanism is a key instrument of international support that helps Ukraine resist these attacks while building long-term digital resilience,' Norway's Foreign Minister Espen Barth Eide said in a statement.

— Zoom out: Norway is the 12th country to join the Tallinn Mechanism — which includes Estonia, the United Kingdom, Germany, Canada and the U.S. The group was established in 2023 to coordinate private sector and government aid to Ukraine.

Quick Bytes

LOCATION, LOCATION, LOCATION — Bodyguards using the fitness app Strava inadvertently revealed the locations of Swedish leaders, writes Lynsey Chutel for The New York Times.

'HORRIFIC BEHAVIOR' — In a series of posts on X, the AI chatbot Grok apologized for 'horrific behavior' after posts that included expressing support for Adolf Hitler, Anthony Ha reports for TechCrunch.

Also Happening Today

The Armed Forces Communications and Electronics Association holds the TechNet Emergency 2025 conference. 9 a.m.

Chat soon. Stay in touch with the whole team: Rosie Perper (rperper@ John Sakellariadis (jsakellariadis@ Maggie Miller (mmiller@ and Dana Nickel (dnickel@
