
Urgent rental warning as car giant exposed using James Bond tech to sting customer for $440
Before returning his rented Volkswagen to Hertz, a driver steered his car through a high-tech scanner system.
Arched LED lights and AI-enabled cameras scanned the vehicle's fenders and panels for scratches, wheels for scuffs, tires for tread wear, windows for cracks, and undercarriage for damage.
According to Hertz, the system is designed to speed up inspections and reduce disputes.
But minutes after pulling the VW through the automated scanner, the driver says he received a $440 bill — for a one-inch scrape on a wheel.
The charge came after Hertz began rolling out the technology as part of its partnership with UVeye, an AI startup that builds automated vehicle inspection systems.
The renter, identified only as Patrick, said the total included $250 for the repair, $125 for processing, and a $65 administrative fee, according to The Drive.
He isn't alone. Another renter on Reddit claimed they were billed $195 for a minor 'ding' after returning a Toyota Corolla to a Hertz location equipped with the same AI system.
In April, Hertz representatives told DailyMail.com that the newly-implemented machines would not be used to bill customers for minor cosmetic issues.
Instead, they said the system was intended to prioritize safety and maintenance — not penalties.
A representative for Hertz verified Patrick's story and said the company's policy has not changed. They're standing by the $440 charge.
Now, the company confirms it uses a 1-inch standard for dents — roughly the size of a golf ball — when determining whether to issue a damage charge.
'Over 97 percent of cars scanned with this technology have not detected any billable damage, proving a vast majority of rentals are incident-free,' a spokesperson for the company said.
'Vehicle damage has long been a common pain point across the car rental industry for customers and companies alike.
'At Hertz, we're using this technology to address this friction head-on and our goal is to always improve the customer experience while ensuring customers are not charged for damages they did not cause and by bringing greater transparency, precision, and speed to the process when they do.'
Independent analysts told DailyMail.com that Hertz's AI rollout reflects a growing tension between companies' tech solutions and customers' service expectations.
'A line is crossed when AI applications become overly aggressive and prioritize efficiency over customer fairness and satisfaction,' David Linthicum, an AI analyst, said.
'Customers value fairness and human interaction.'
Still, Hertz believes the new systems will make damage fees more transparent.
Traditionally, car rental companies relied on employees to inspect vehicles, a process that had its limitations — especially when it came to detecting undercarriage damage or worn tires.
UVeye says its scanners apply a consistent, fleet-wide standard to inspections, improving accuracy and fairness.
'Hertz is setting a new standard for vehicle maintenance and fleet management in the rental industry,' Amir Hever, the CEO and Co-Founder of UVeye, said.
'Our AI-driven inspection systems complement manual checks with consistent, data-backed assessments completed in seconds.'
But while the technology may be more consistent, some drivers who've been hit with fees say they're finished with Hertz altogether.
'I will no longer be using Hertz,' the Corolla renter said on Reddit. 'Reached out to customer service, and they said they stand by the AI.'
It's the latest change from Hertz, which operates the second-largest rental vehicle fleet in the US, to ruffle some customers' feathers.
In 2022, the rental company purchased thousands of Tesla and Polestar EVs as it attempted to entice trendy customers.
But vacationers, wary of navigating America's patchy public charging infrastructure, rarely rented the EVs, and Hertz ended up selling the cars at a loss.
On a more positive note, the company has also gained attention on Wall Street after billionaire investor Bill Ackman said he had started purchasing its stock.
Related Articles


BBC News
44 minutes ago
Lotus Cars has 'no plans' to close any factory
Sportscar maker Lotus has declared it has "no plans" to close any factory after it emerged the company was considering setting up a new plant in the US. The BBC understands the iconic manufacturer had been considering ending production at its headquarters in Hethel, Norfolk, which would put 1,300 jobs at risk. In a statement on X, it said: "Lotus Cars is continuing normal operations, there are no plans to close any factory," but admitted it was "actively exploring" options in the global market. The story was first reported by the Financial Times, but sources within the company have told the BBC the situation was under review and that moving production to the US was being considered. It comes after production in Hethel was temporarily suspended due to disruption caused by the introduction of tariffs on cars being imported to the US. The US is a major market for Lotus, but tariffs threaten its business, as sellers in the US are required to pay 25% on imports of cars and car parts. The statement added: "Lotus remains committed to the UK, to our customers, employees, dealers, suppliers, as well as our proud British heritage."


Geeky Gadgets
2 hours ago
Build Local n8n AI Agents for Free: A Private, Offline AI Assistant
What if you could harness the power of advanced AI models without ever relying on external servers or paying hefty subscription fees? Imagine running intelligent agents directly on your own computer, with complete control over your data and workflows tailored to your exact needs. It might sound like a dream reserved for tech giants, but it's now entirely possible, and surprisingly simple. By using tools like Docker and an open source AI starter kit, you can set up a privacy-focused AI ecosystem in just two straightforward steps. Whether you're a developer, a data enthusiast, or simply curious about AI, this guide will show you how to take control of your automation journey.

In this tutorial by Alex Followell, you'll discover how to install and configure a local AI environment that's both powerful and cost-free. From deploying versatile tools like n8n for workflow automation to running large language models such as Llama entirely offline, this setup offers unmatched flexibility and security. You'll also learn about the key components, like PostgreSQL for data storage and Qdrant for advanced search, that make this system robust and scalable. By the end, you'll not only have a functional AI setup but also a deeper understanding of how to customize it for your unique goals. Could this be the most empowering step toward AI independence? Let's explore.

Run AI Locally

Step 1: Install Docker

The first step to creating your local AI environment is to install Docker, a robust container management platform that allows you to run and manage isolated software environments on your computer. Docker Desktop is recommended for most users due to its intuitive interface and cross-platform compatibility.

- Download Docker Desktop from the official Docker website.
- Follow the installation instructions for your operating system (Windows, macOS, or Linux).
- Verify the installation by opening a terminal and running the command docker --version.
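As a quick pre-flight check before going further, the two command-line tools this guide relies on (docker and git) can be probed from any terminal. A minimal POSIX shell sketch, which only reports what is already installed:

```shell
#!/bin/sh
# Pre-flight check: report whether the tools this guide relies on are on PATH.
# "NOT installed" simply means that step of the setup still needs doing.
missing=0
for tool in docker git; do
  if command -v "$tool" >/dev/null 2>&1; then
    printf '%s: installed\n' "$tool"
  else
    printf '%s: NOT installed\n' "$tool"
    missing=$((missing + 1))
  fi
done
printf 'tools still to install: %d\n' "$missing"
```

If docker shows as installed, `docker --version` should then print a version string; git is needed for the clone step that follows.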
Docker acts as the backbone of your local AI setup, ensuring that all components operate seamlessly within isolated containers. Once installed, you'll use Docker to deploy and manage the tools required for your AI workflows.

Step 2: Clone the AI Starter Kit

After installing Docker, the next step is to download the AI starter kit from GitHub. This repository contains pre-configured tools and scripts designed to simplify the setup process and get you up and running quickly.

- Visit the GitHub repository hosting the AI starter kit.
- Clone the repository to your local machine using the terminal command git clone [repository URL].
- Navigate to the cloned directory and follow the setup instructions provided in the repository's documentation.

This step involves configuring your environment, setting up workflows, and integrating the necessary components. By the end of this process, your system will be equipped to run AI models and manage data locally, giving you a powerful and flexible AI solution.

Key Components Installed Locally

Once the setup is complete, several essential components will be installed on your machine. These tools work together to enable seamless AI automation and data processing, all within a local environment.

- n8n: A workflow automation platform that allows you to design and execute custom workflows tailored to your specific needs.
- PostgreSQL: A robust local database for securely storing workflows, credentials, and other critical data.
- Qdrant: A vector database optimized for document storage and advanced search capabilities, ideal for handling large datasets.
- Ollama: A tool for running various large language models (LLMs) locally, allowing advanced natural language processing tasks.

These components are hosted within Docker containers, ensuring they remain isolated yet interoperable. This modular design allows you to customize your setup based on your specific goals and hardware capabilities.

AI Model Options

One of the most compelling features of this setup is the ability to run large language models (LLMs) locally. The AI starter kit supports several models, each optimized for different tasks, giving you the flexibility to choose the best fit for your projects.

- Llama: A versatile model suitable for a wide range of natural language processing tasks, including text generation and summarization.
- DeepSeek: An advanced model designed for search and retrieval applications, offering high accuracy and efficiency.

You can select models based on your hardware capabilities and project requirements. Whether you're working on text analysis, data processing, or creative content generation, this flexibility ensures that your setup aligns with your objectives.

Benefits of Running AI Locally

Operating AI agents on your local machine provides numerous advantages, particularly for users who prioritize privacy, cost-efficiency, and customization.

- Cost-free: There are no subscription fees or API usage costs, making this setup highly economical.
- Offline functionality: Once configured, the system operates entirely offline, eliminating the need for constant internet connectivity.
- Data privacy: All data remains on your local machine, giving you complete control and security over sensitive information.
- Customizable workflows: With n8n, you can design workflows tailored to your unique requirements, enhancing productivity and efficiency.

This approach is particularly beneficial for individuals and organizations seeking a self-contained AI solution that doesn't depend on external services or third-party platforms.

Challenges to Consider

While running AI agents locally offers significant benefits, it's important to be aware of the potential challenges and plan accordingly.

- Hardware requirements: Running AI models can be resource-intensive, requiring a powerful CPU, sufficient RAM, and ample storage space to function effectively.
- Technical complexity: The setup process involves using terminal commands and configuring multiple components, which may be challenging for users without technical expertise.
- Maintenance responsibility: You'll need to manage updates, security patches, and general system maintenance independently.

By understanding these challenges and using community resources, you can overcome potential obstacles and ensure a smooth setup process.

Additional Resources

To help you make the most of your local AI setup, consider exploring the following resources:

- Community forums: Engage with online communities focused on n8n, Docker, and AI automation to exchange knowledge and seek advice.
- Tutorials: Access detailed guides on topics such as AI automation, image generation, and prompt engineering to expand your expertise.
- Pre-built templates: Use ready-made workflows and configurations to streamline your setup and save time.

These resources can provide valuable insights and support, helping you navigate the complexities of deploying AI locally and unlocking its full potential.

Media Credit: Alex Followell

Filed Under: AI, Guides

Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, Geeky Gadgets may earn an affiliate commission. Learn about our Disclosure Policy.


Daily Mail
2 hours ago
The common question that will make people dislike you, according to a body language expert
An expert in body language and people skills has revealed one question you shouldn't ask in social situations, as it is likely to make people dislike you.

Vanessa Van Edwards is the founder of Science of People, an organisation which 'gives people science-backed skills to improve communication and leadership'. She is also the author of Captivate: The Science of Succeeding with People and Cues: Master the Secret Language of Charismatic Communication.

She recently appeared on an episode of Steven Bartlett's podcast Diary of a CEO, where she spoke about a range of topics, including why you shouldn't fake smile, how to be more charismatic, and the question you shouldn't ask people in social situations.

While discussing which questions you should ask people if you want to 'level up' your connection with them, Vanessa also highlighted the one query which she believes is a conversational no-no.

She said: 'Stop asking "what do you do?" [...]. That is telling them their brain can stay on autopilot. Asking someone that question is really asking "what are you worth?".

'And if someone's not defined by what they do, it's actually a rude question.'

Moving on to what to ask in place of that question, she said: 'You can replace it with "working on anything exciting these days?" or "working on anything exciting recently?"'

Vanessa explained: 'This is permission connection. You ask someone that question, you are giving them permission if they want to tell you about what they do.

'If they are not defined by what they do, they'll tell you something better.

'And that also gives you really good nuggets for the next time you see them, when you can say, "hey, how was that thing you were working on?".'

Moving on to how people can follow up that question, she suggested asking 'what's your biggest goal right now?'.

She continued: 'When you ask this question, you're gonna get one of two responses. One, someone shuts you down [...] or, they're going to tell you about goals.
'That's also a great thing you can follow up on, because then when you see them a month later, or a week later, or a year later, you can be like, "hey, how did that go?".'

Discussing how you can get to know someone better, Vanessa suggested another question you can ask them. She said: '[The question] sounds innocuous, but it's not.

'It's "what book, movie or TV character is most like you and why?". It's kind of a silly dinner party question that sounds casual, but the answer to this question is so incredibly important.

'How someone relates to characters, their values or personality is how they see themselves, and people's answers will shock you.'

Vanessa then gave an example, explaining: 'I was friends with someone for six years. [She was] one of my closest friends, I saw her all the time.

'We went on a weekend trip together [...] and I asked her this question. I hypothesised: she's a mom of three, super funny, super savvy, so I was like, "she's going to pick a great TV mom character that's super savvy and funny."

'I asked her, she thinks about it for maybe one second, and says "Katniss from The Hunger Games".

'I was like, "the one who's fighting for her life?".

'She replied, "yes, that's how I feel every day." And we, for the first time in six years, had a conversation about how she feels about her day that was totally different from anything I had ever known - that she feels scared and lonely, and that she's fighting for survival.

'And it was the first time that I truly connected with her.

'This question [has changed my relationship with so many people] based on how they see themselves, not how I see them, but how they see themselves.'