
Hackers are using a modified Salesforce app to trick employees and extort companies, Google says
The hackers – tracked by the Google Threat Intelligence Group as UNC6040 – have 'proven particularly effective at tricking employees' into installing a modified version of Salesforce's Data Loader, a proprietary tool used to bulk import data into Salesforce environments, the researchers said.
The hackers use voice calls to steer employees to a purported Salesforce connected app setup page, where the employees are talked into approving an unauthorized, modified app the hackers built to imitate Data Loader.
If the employee installs the app, the hackers gain 'significant capabilities to access, query, and exfiltrate sensitive information directly from the compromised Salesforce customer environments,' the researchers said.
The access also frequently gives the hackers the ability to move throughout a customer's network, enabling attacks on other cloud services and internal corporate networks.
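For Salesforce administrators who want to check their own orgs, one practical starting point is to review which connected apps currently hold OAuth tokens and flag anything unfamiliar posing as Data Loader. The snippet below is a minimal illustrative sketch, not guidance from Google or Salesforce: it assumes a Python environment with the simple_salesforce library, admin credentials, and access to the standard OauthToken object via SOQL; the EXPECTED_APPS allowlist is hypothetical.

from simple_salesforce import Salesforce

# Hypothetical allowlist of connected apps the org expects to see.
EXPECTED_APPS = {"Salesforce CLI", "Dataloader Partner"}

def list_connected_app_tokens(sf):
    """Return (app name, use count, last used) for each OAuth token in the org."""
    result = sf.query_all("SELECT AppName, UseCount, LastUsedDate FROM OauthToken")
    return [(r["AppName"], r["UseCount"], r["LastUsedDate"]) for r in result["records"]]

if __name__ == "__main__":
    # Placeholder credentials; in practice, load these from a secrets manager.
    sf = Salesforce(username="admin@example.com", password="...",
                    security_token="...")
    for app, uses, last_used in list_connected_app_tokens(sf):
        flag = "" if app in EXPECTED_APPS else "  <-- unexpected, review"
        print(f"{app}: {uses} uses, last used {last_used}{flag}")

Unexpected entries with recent last-used dates and high use counts would be the first ones to investigate.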
Technical infrastructure tied to the campaign shares characteristics with activity linked to the broader, loosely organized ecosystem known as 'The Com,' whose small, disparate groups engage in cybercrime and sometimes violent activity, the researchers said.
A Google spokesperson told Reuters that roughly 20 organizations have been affected by the UNC6040 campaign, which has been observed over the past several months. A subset of those organizations had data successfully exfiltrated, the spokesperson said.
A Salesforce spokesperson told Reuters in an email that 'there's no indication the issue described stems from any vulnerability inherent in our platform.' The spokesperson said the voice calls used to trick employees 'are targeted social engineering scams designed to exploit gaps in individual users' cybersecurity awareness and best practices.'
The spokesperson declined to share the specific number of affected customers, but said that Salesforce was 'aware of only a small subset of affected customers,' and said it was 'not a widespread issue.'
In a March 2025 blog post, Salesforce warned customers about voice phishing, or 'vishing,' attacks and about hackers abusing malicious, modified versions of Data Loader.

Related Articles
Yahoo
White House Wants Bias-Free AI for Government Work
The White House is cooking up an order to make sure AI tools that work with the government stay politically neutral. Officials worry that models trained on internet data can drift into liberal or conservative slants, so the order would set a clear standard. At the center of the plan is AI czar David Sacks, who has pointed to embarrassing moments such as Google's Gemini generating images of a Black George Washington and racially diverse Nazi soldiers. OpenAI, Anthropic, Google (NASDAQ:GOOG) and Elon Musk's xAI fear the order could pick industry winners and spark free-speech fights. It lands just as the Pentagon is handing out nearly two hundred million dollars in AI contracts, and tying neutrality to federal deals could shift who wins big and shape how the industry builds its next generation of tools. The order also comes bundled with moves to boost chip exports and speed up data center approvals. As Washington juggles bias concerns and its tech rivalry with China, investors will be watching every step. This article first appeared on GuruFocus.

Miami Herald
14 interesting facts about self-driving cars
While they may seem like something out of a sci-fi movie, driverless cars are quickly becoming part of the new normal. Here, Salvi, Schostok & Pritchard P.C. shares 14 interesting facts about self-driving cars.

1. The Idea of Driverless Cars Dates Back to the 1930s
The idea of driverless cars was introduced by General Motors in a 1939 exhibit and made a reality in 1958.

2. Waymo Began as a Secret
By now, most people have heard of Waymo, Google's entry into the self-driving car industry, and Waymo vehicles have logged the most miles of any autonomous fleet. However, the project began as a secret. It was started by Sebastian Thrun, who led the Stanford University robotics team that won the DARPA Grand Challenge. In 2012, a few years after the project began, Google revealed that its fleet of autonomous vehicles had driven over 300,000 miles. By 2020, the vehicles had logged more than 20 million miles on public roads.

3. Driverless Car Stats Show Continued Growth
- More than 1,400 driverless vehicles are currently in testing in the U.S.
- 3 billion miles have been logged using Tesla's Autopilot.
- Waymo vehicles have driven over 20 million miles across 25 U.S. cities.
- Self-driving vehicles logged more than 450,000 miles in California in a single year.
- Lyft and Aptiv have provided more than 100,000 commercial robo-taxi rides in Las Vegas.
- The global self-driving vehicle industry is currently expanding by 16% a year.
- Sales of self-driving vehicles are projected to reach 58 million units.
- The global autonomous vehicle market is expected to reach nearly $450 billion by 2035.

4. There Are 6 Levels of Automation
According to the Society of Automotive Engineers, there are six levels of driving automation, ranging from Level 0, with no automation, to Level 5, a fully automated, driverless car. Here is more information about each level.

LEVEL 0: No automation. The human driver is responsible for all driving actions.

LEVEL 1: Driver assistance. An Advanced Driver Assistance System (ADAS) may assist the driver with basic driving functions, such as accelerating, braking, or steering. Adaptive cruise control is an example of Level 1 automation: it controls acceleration and braking, usually in highway driving, so the driver may be able to take a foot off the pedal.

LEVEL 2: Partial automation. The vehicle can perform more complex, simultaneous actions, such as controlling steering, accelerating, and braking at the same time, but the human driver must continue to pay attention. Highway and traffic-jam assistance are examples of Level 2 automation.

LEVEL 3: Conditional automation. This is the first level at which the human driver may disengage from driving in certain situations, such as good weather and light traffic, but the driver must be able to re-engage when the system requests it. The vehicle can bring itself to a stop in case of a failure.

LEVEL 4: High automation. The vehicle's autonomous driving system can monitor the driving environment and handle all driving functions on routine routes and in routine conditions. The vehicle alerts the driver if something beyond its limits requires the driver to take over; if the driver does not respond, the vehicle secures itself.

LEVEL 5: Full automation. No human driver is necessary because the vehicle is fully autonomous and can operate under any conditions. Humans may not even be in the vehicle, and vehicles at this level may not have a steering wheel or gas and brake pedals.
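To make the scale easier to keep straight, here is a short, illustrative Python sketch of the six levels; the enum names and the helper are this article's own shorthand, not an official SAE artifact.

from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # human performs all driving
    DRIVER_ASSISTANCE = 1       # ADAS helps with steering OR speed
    PARTIAL_AUTOMATION = 2      # steering AND speed, driver must supervise
    CONDITIONAL_AUTOMATION = 3  # system drives in limited conditions, driver on standby
    HIGH_AUTOMATION = 4         # no driver needed within a defined operating domain
    FULL_AUTOMATION = 5         # no driver needed under any conditions

def driver_must_supervise(level):
    """Levels 0-2 require constant human attention; Level 3 and above do not."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_must_supervise(SAELevel.PARTIAL_AUTOMATION))  # True
print(driver_must_supervise(SAELevel.HIGH_AUTOMATION))     # False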
5. Autonomous Vehicles Rely on Many Sensors
Autonomous vehicles are typically equipped with many different sensors, such as cameras, radar, lidar, ultrasonic sensors, and GPS units. These sensors monitor various data points and feed the information into the vehicle's computer, which uses it to build a map of the surroundings, filling in more detail as new readings arrive, and then plots a path based on that map. The vehicles also run complex algorithms, predictive machine-learning systems, and object recognition tools that help the software follow traffic rules and avoid collisions.

6. The NHTSA Has Identified Five Major Benefits of Autonomous Cars
The National Highway Traffic Safety Administration has identified five major advantages of autonomous vehicles.

Safety: Most serious car accidents are caused by human error, so the NHTSA believes autonomous vehicles could be safer by removing human drivers and eliminating their potential mistakes.

Economic and societal benefits: Motor vehicle collisions cost more than $250 billion in economic activity each year through injuries, property damage, lost workplace productivity, deaths, and decreased quality of life. Eliminating the main factor behind these accidents (the human element) could save many of these costs.

Efficiency and convenience: Roads could be more easily maintained and regulated with autonomous vehicles. Drivers spent nearly 4 billion hours stuck in traffic delays in 2024.

Mobility: Driverless vehicles could help people whose disabilities prevent them from driving, and that increased mobility could provide new pathways to employment and a higher quality of life.

Environmental: Driverless vehicles are usually electric, which can improve efficiency and encourage less personal driving, resulting in significant environmental benefits.

7. Autonomous Cars Still Face Safety Challenges
While these potential benefits are exciting, autonomous vehicles still face challenges that pose a threat to safety.

Lidar and radar sensor limitations: Lidar performance degrades in heavy rain, snow, and fog, and radar provides relatively low-resolution data, which can limit how reliably the vehicle perceives its surroundings.

Traffic: Autonomous vehicles may struggle in highly congested areas. Driverless cars are more likely to be involved in rear-end collisions, so they may have difficulty maintaining a safe distance in bumper-to-bumper traffic.

Security breaches: Driverless vehicles may be more susceptible to manipulation through hacking.

Reliance on artificial intelligence: Drivers may become overly reliant on the technology, putting themselves in situations where they cannot quickly take control of the vehicle, and software malfunctions can lead to other accidents.

8. Driverless Cars Might Be Safer Now
A study by the University of Michigan Transportation Research Institute, General Motors, and Cruise, published in late 2023, compared 5.6 million miles driven by human rideshare drivers in San Francisco with 1 million miles driven by driverless rideshare vehicles and found that the human drivers were more likely to be responsible for traffic crashes. The researchers found that the autonomous vehicles had 65% fewer collisions. However, the technology is still being developed, so there may not yet be enough data to determine which set of drivers is really safer.
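To see how a comparison like that works, here is a toy per-mile calculation in Python; only the mileage totals above come from the study, and the crash counts below are made-up numbers chosen purely to illustrate the normalization.

# Hypothetical crash counts; only the mileage totals come from the article.
human_miles, human_crashes = 5_600_000, 80   # assumed count, for illustration
av_miles, av_crashes = 1_000_000, 5          # assumed count, for illustration

# Normalize to crashes per million miles so the two datasets are comparable.
human_rate = human_crashes / (human_miles / 1_000_000)
av_rate = av_crashes / (av_miles / 1_000_000)

reduction = 1 - av_rate / human_rate
print(f"Human: {human_rate:.1f} crashes/M mi, AV: {av_rate:.1f} crashes/M mi, "
      f"reduction: {reduction:.0%}")   # 65% for these made-up inputs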
9. The First Pedestrian Death Involved a Self-Driving Uber
In 2018, a woman in Tempe, Arizona, was behind the wheel of a self-driving Uber test vehicle when it hit and killed a pedestrian. The safety driver, who was believed to have been distracted by her phone at the time, was charged with negligent homicide, and the crash led Arizona to temporarily suspend Uber's driverless operations in the state. Uber's self-driving cars had been involved in 37 minor accidents before the pedestrian death.

10. Driverless Cars Are More Likely to Be Involved in Rear-End Accidents
A study published in a 2020 issue of the journal Transportation Research Procedia found that, of the motor vehicle collisions studied in California between 2015 and 2017, 64.2% of those involving autonomous vehicles were rear-end accidents, compared with 28.3% of accidents involving conventional vehicles.

11. Most Autonomous Vehicle Accidents Occurred When the Vehicle Proceeded Straight
The same study revealed that 65.2% of accidents involving autonomous vehicles occurred when the vehicle proceeded straight when it should not have. Other driving maneuvers involved in these accidents, and how often they contributed, include:
- 13% making a left turn
- 8.7% making a right turn
- 8.7% passing another vehicle
- 2.2% changing lanes
- 2.2% entering traffic

12. Driving at an Unsafe Speed Is the No. 1 Cause of Driverless Car Accidents
The researchers also examined driver errors in crashes involving autonomous vehicles and found that driving at an unsafe speed was the most common one.

13. Drivers Have Died While in 'Autopilot'
The Tesla Deaths database suggests that, as of May 2025, 58 people have died in accidents in which the vehicle's Autopilot program was engaged.

14. Driverless Cars Were Involved in Nearly 400 Crashes in 11 Months
In June 2022, automakers reported nearly 400 crashes involving vehicles with partially automated driver-assistance systems over an 11-month period beginning in mid-2021, according to the NHTSA. That total included 273 crashes involving Tesla vehicles.

This story was produced by Salvi, Schostok & Pritchard P.C. and reviewed and distributed by Stacker. © Stacker Media, LLC.


Android Authority
Google targets cybercriminals behind massive Android malware scheme
TL;DR
- Google is suing the creators of BadBox 2.0, a botnet that infected 10 million off-brand Android devices.
- The malware often came pre-installed on cheap streaming boxes, tablets, and projectors, mostly made in China.
- Infected devices were used for ad fraud and to hide other cybercriminals' activity behind victims' home networks.

Before buyers even turned them on, many of these devices were already infected. That's the reality for millions who unknowingly bought Android-powered devices hijacked by BadBox 2.0, a massive botnet that Google is now trying to shut down in court.

As detailed in a blog post, Google is filing a new lawsuit in New York against the group behind the operation. It says BadBox 2.0 is the largest known botnet targeting internet-connected TVs and other Android-based gadgets, with more than ten million devices compromised. These weren't high-end Android TVs or certified tablets. Think off-brand streaming boxes, digital projectors, and low-cost tablets, mostly running the Android Open Source Project, which lacks Google's built-in security protections. Many were sold under unfamiliar brand names, and in many cases the malware was already baked in when buyers took them out of the box.

Once powered on and connected to the internet, the devices became part of a hidden network controlled by cybercriminals. Some were used to commit large-scale ad fraud, simulating fake ad clicks to steal money from advertisers. Others were sold off as part of 'residential proxy' services, allowing shady actors to route their traffic through real users' home networks, effectively hiding their tracks behind an unsuspecting user's IP address.

The botnet was uncovered through a joint investigation by Google, HUMAN Security, and Trend Micro. Google says its Ad Traffic Quality team spotted the activity early, blocking bad traffic and shutting down thousands of accounts trying to profit from the scheme. On the user's end, Google Play Protect now flags and blocks apps with BadBox behavior, even if they're sideloaded from outside the Play Store.

The FBI has also issued a public warning urging people to check their connected devices for signs of tampering or strange behavior, especially if the hardware came from an unknown brand or required disabling Google Play Protect during setup. The agency says most of the compromised gadgets were manufactured in China and sold with malware pre-installed, or were infected shortly after setup via malicious apps from unofficial app stores.

By taking the case to court, Google hopes to target the people behind the scheme. While the company's protections contained the damage, it's another reminder that the real cost of a budget streaming box might not be just what you pay at checkout.