Latest news with #U.S.FederalTradeCommission


Time of India
09-07-2025
- Business
- Time of India
US 'click to cancel' rule blocked by appeals court
A U.S. appeals court blocked a rule that would have required businesses to make it as easy to cancel subscriptions and memberships as it is to sign up, saying the agency that created it did not follow protocol. The U.S. Federal Trade Commission, which passed the rule under former Democratic Chair Lina Khan, failed to conduct a preliminary analysis of the rule's costs and benefits, said the 8th U.S. Circuit Court of Appeals in St. Louis. The rule was set to take effect on July 14. A spokesperson for the FTC declined to comment on Tuesday. The rule would have required retailers, gyms and other businesses to provide cancellation methods for subscriptions, auto-renewals and free trials that convert to paid memberships that are "at least as easy to use" as the sign-up process. It also aimed to keep companies from making consumers who signed up through an app or a website go through a chatbot or agent to cancel. The U.S. Chamber of Commerce and a trade group representing major cable and internet providers such as Charter Communications, Comcast and Cox Communications, and media companies like Disney Entertainment and Warner Bros. Discovery, are among those suing to block the rule.


Time of India
02-07-2025
- Business
- Time of India
FTC seeks more information about SoftBank's Ampere deal: Report
The U.S. Federal Trade Commission is seeking more details about SoftBank Group Corp's planned $6.5 billion purchase of semiconductor designer Ampere Computing, Bloomberg News reported on Tuesday. The inquiry, known formally as a second request for information, suggests the acquisition may undergo an extended government review, the report said. SoftBank announced the purchase of the startup in March, part of its efforts to ramp up its investments in artificial intelligence infrastructure. The report did not state the reasoning for the FTC request. SoftBank, Ampere and the FTC did not immediately respond to a request for comment. SoftBank is an active investor in U.S. tech. It is leading the financing for the $500 billion Stargate data centre project and has agreed to invest $32 billion in ChatGPT-maker OpenAI.


Business Journals
01-07-2025
- Business
- Business Journals
Top 5 ways to mitigate liability risks when AI is used improperly or goes wrong
As the integration of artificial intelligence into corporate operations accelerates, the rapid deployment of AI tools, often driven by executive and investor pressure, has outpaced the establishment of robust governance, compliance, and cybersecurity frameworks. At the same time, government agencies are increasingly scrutinizing AI technologies and conducting investigations, often using existing consumer protection laws designed to target unfair, fraudulent, and deceptive practices. For instance, the U.S. Federal Trade Commission has sanctioned several companies for misleading marketing campaigns that inaccurately described the capabilities of their AI tools, including the 'world's first robot lawyer' that could not provide legal advice and an AI content detector that 'did no better than a coin toss.' Numerous class action lawsuits have been filed challenging health insurance companies' use of AI predictive algorithms on the grounds that the algorithms have high error rates and systematically deny coverage without input from a medical professional. Similarly, a class action has been brought against Workday's AI-powered applicant screening platform alleging that it discriminated based on age. Most troubling, AI technologies have rapidly expanded the cyberthreat landscape, enabling criminals to commit fraud on a massive scale, from phishing email campaigns and deepfake schemes to new variants of malware and even the poisoning and sabotage of AI systems. It is therefore incumbent upon business leaders and in-house counsel to ensure that the deployment of AI technologies effectively manages operational risks, mitigates the likelihood of civil and criminal liability and government enforcement actions, and incorporates appropriate oversight mechanisms to safeguard an organization's infrastructure and reputation. Here are five ways to mitigate your organization's risks.

Incorporate Cybersecurity Standards: As companies race to deploy AI tools, often under pressure from the C-suite, investors, and shareholders, they need to ensure that they do not cut corners that create cybersecurity risks and increase the likelihood of a data breach. At a minimum, AI systems should incorporate the same cybersecurity standards (e.g., access controls, data encryption requirements, intrusion detection and monitoring) as any other tool in an organization's network, because they expand the potential entry points for malicious actors and increase vulnerability risks. Organizations must therefore continuously monitor their AI applications and infrastructure to detect irregularities and potential security breaches such as data poisoning, data manipulation, leakage of personal or confidential information, and misuse.

Adopt AI Governance Controls: Block users on your network from using risky generative AI (GenAI) tools such as DeepSeek's R1 model, released in January 2025, which contained extensive security flaws and critical vulnerabilities. When China's DeepSeek shocked the world with its late-January 2025 announcement that it had developed a model comparable to ChatGPT, millions of people rushed to download the app and experiment with it, even though its privacy policy indicated that user data would be stored on servers located in China, raising significant privacy and security concerns. Such activity could cause serious harm to your organization through the leakage of confidential and proprietary data.
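To make the "block risky GenAI tools" control concrete, the minimal sketch below shows one way a security team might screen outbound requests against approved and prohibited lists of AI services at a web proxy or gateway. The domains, tool names, and policy decisions in the example are hypothetical placeholders for illustration only, not a recommended or definitive blocklist.

```python
# Minimal, illustrative sketch of an AI acceptable-use gate at a proxy layer.
# Domains and policy decisions below are hypothetical examples, not an
# endorsement of any product or a definitive blocklist.

from urllib.parse import urlparse

# Hypothetical policy: AI tools reviewed and approved by the organization.
APPROVED_AI_DOMAINS = {
    "approved-ai.example.com",    # internally hosted model gateway (assumed)
    "enterprise-llm.example.net",  # vendor instance with a signed agreement (assumed)
}

# Hypothetical policy: AI tools explicitly prohibited after a risk review.
BLOCKED_AI_DOMAINS = {
    "risky-genai.example.org",
}

def evaluate_request(url: str) -> str:
    """Return 'allow', 'block', or 'review' for an outbound AI-tool request."""
    host = urlparse(url).hostname or ""
    if host in BLOCKED_AI_DOMAINS:
        return "block"   # prohibited under the AI Acceptable Use policy
    if host in APPROVED_AI_DOMAINS:
        return "allow"   # approved tool; log the request for monitoring and auditing
    return "review"      # unknown AI endpoint; route to security review

if __name__ == "__main__":
    for url in [
        "https://approved-ai.example.com/v1/chat",
        "https://risky-genai.example.org/api",
        "https://unknown-llm.example.io/generate",
    ]:
        print(url, "->", evaluate_request(url))
```

In practice a control like this would sit alongside logging and data-loss monitoring so that security teams can see which AI services employees reach and what categories of data leave the network.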
Organizations should clearly set out rules and boundaries in their AI Acceptable Use policy, specifying which types of AI tools are permitted and which are prohibited. The policy should also make clear that any use of AI tools must comply with all applicable laws and regulations and that violations of the policy will result in disciplinary action. Organizations should further monitor how AI tools are being used and what information is being input into any publicly available AI tool.

Reduce the Likelihood of Government Enforcement Actions and False Claims Act (FCA) Liability: Government agencies have begun closely scrutinizing the use of AI tools and cracking down on misleading, deceptive, or unfair trade practices in connection with AI technology. Numerous enforcement actions have been brought by the U.S. Federal Trade Commission and the Securities and Exchange Commission against companies for issuing false and misleading statements about the capabilities of their AI systems, a practice referred to as 'AI washing.' The Department of Justice (DOJ) is likely to target AI-powered healthcare billing and coding systems in its push to prosecute health care fraud, which it recently announced is the top white-collar fraud priority under Attorney General Bondi. Errors in automated coding and claim submissions to the government can result in liability under the FCA, which carries treble damages. Similarly, predictive diagnostic AI tools may influence medical practitioners, resulting in upcoding and overbilling. To reduce liability risk, organizations should be able to demonstrate to government agencies that they acted responsibly in deploying and overseeing AI systems; established governance controls to ensure the technology is used only for its intended purpose and works reliably, ethically, and in compliance with applicable law; conducted robust risk assessments; regularly audited and monitored AI tools; and promptly investigated, corrected, and remediated any identified discrepancies or errors. Indeed, DOJ's September 2024 update to its guidance on the 'Evaluation of Corporate Compliance Programs' emphasizes the importance of assessing and minimizing evolving risks, including the potential for misuse by company insiders, when incorporating AI tools into an organization's enterprise risk management strategy.

Take Steps to Address the Rise in Lawsuits Involving AI Tools: AI tools can go wrong, make mistakes, and cause harm. Since the launch of GenAI, there has been a steady increase in lawsuits involving the misuse of AI tools. As noted above, class action lawsuits have been brought against health insurance companies for wrongful denials of coverage based on AI predictive tools. Failure to implement guardrails in an AI system and to monitor its outputs can prove catastrophic and provide the basis for a product liability claim. On May 21, 2025, U.S. District Judge Anne C. Conway of the Middle District of Florida denied a motion to dismiss and allowed a lawsuit accusing Google and an AI chatbot company of causing a 14-year-old's suicide after he became addicted to an AI chatbot to move forward, finding the 'alleged design defects' actionable. On June 4, 2025, Reddit sued AI startup Anthropic in California state court for unlawfully using its data for commercial purposes without paying for it and in violation of Reddit's user data policy.
It is only a matter of time before we see legal malpractice claims against lawyers for filing pleadings with hallucinated legal citations. Once a problem with an AI tool is detected, steps should promptly be taken to investigate the issue, preserve the evidence, consider making a voluntary self-disclosure, make any required disclosures to state and federal agencies, and fully remediate the situation.

Get Ready for the Challenges of Agentic AI: AI agents powered by large language models are not only generating new content in response to prompts but also autonomously making and executing decisions. Agentic AI has the potential to transform business operations. AI agents, however, could also increase liability risks while making organizations more susceptible to cyberattacks. AI agents are authenticated users on a network that operate using corporate credentials and rapidly execute decisions, and they can be tricked and manipulated by prompt injection or other adversarial action. It is therefore crucial to adopt clear policies, safeguards, oversight frameworks, and auditing procedures, and to conduct AI red-teaming exercises.

Discover how Hinckley Allen's cross-disciplinary Artificial Intelligence Group helps clients navigate the legal, regulatory, and business challenges of emerging AI technologies. From risk management to strategic deployment, our attorneys provide tailored counsel to help you innovate with confidence. Learn how our insights can support your business's AI journey.

Hinckley Allen is a full-service business law firm dedicated to delivering exceptional results for its clients. With more than 170 attorneys across offices in Connecticut, Florida, Illinois, Massachusetts, New Hampshire, New York, and Rhode Island, the firm represents leading regional, national, and global businesses in their most critical legal and business matters. Since 1906, Hinckley Allen has played a vital role in shaping the landscape of law, business, government, and community engagement.

B. Stephanie Siegmann is a litigation partner at Hinckley Allen. She specializes in handling high-stakes criminal and civil litigation matters, sensitive internal investigations, government enforcement proceedings, and cyber-related incidents of all kinds. Stephanie serves as Chair of the International Trade & National Security group and Co-Chair of the Cybersecurity, Privacy & Data Protection and Artificial Intelligence practice groups.

Miami Herald
01-07-2025
- Business
- Miami Herald
California's new consumer protection laws go into effect July 1
New California state laws going into effect on Tuesday will protect consumers from shady auto-renewal subscriptions, the sale of stolen goods via online marketplaces, and undisclosed cleaning requirements for guests at short-term rentals like Airbnb and Vrbo. Lawmakers also tweaked one of Gov. Gavin Newsom's most prized mental health projects to keep loved ones notified when their mentally ill kin are traveling through the court system. And some cities in the Bay Area will see minimum wage increases.

Auto-renew protections
Consumer advocates have long argued that companies take advantage of consumers with subscriptions that automatically renew - a $1.5 trillion industry, according to state lawmakers. In 2023, the U.S. Federal Trade Commission accused Amazon of automatically enrolling millions of customers in Amazon Prime - a paid subscription - and then making it hard to cancel. Then, this spring, the agency took rideshare giant Uber to court over what it said were "unfair and deceptive practices" involving its auto-renew subscription service. AB 2863, sponsored by Los Angeles-area Assemblymember Pilar Schiavo, a Democrat, requires companies to get explicit approval from customers to auto-renew their subscriptions. Companies must send customers an annual reminder of their subscription and instructions on how to cancel, and they'll have to make it easier for customers to cancel. "As it stands currently, many subscriptions are almost impossible to cancel without undertaking a Kafkaesque process that frustrates consumers to no end, and does so to the direct financial benefit of corporations," the Consumer Federation of California wrote in a bill analysis last fall. The federation and district attorneys supported the bill. It was opposed by the California Chamber of Commerce and the California Retailers Association.

Vacation rental cleaning fees
Guests at short-term rentals hosted by Airbnb, Vrbo and the like will also enjoy added protections on July 1. Existing law, as of July 2024, required those companies to alert customers to all fees and tacked-on charges before they book their stay, or face a fine of up to $10,000. On July 1, it'll also be illegal for hosts to charge guests for failing to perform cleaning duties without advance notice. Hosts must disclose all fees up front in advertisements - not just on their profiles. Those additions are part of AB 2202, sponsored by former Assembly Speaker Anthony Rendon, a Democrat from Los Angeles. Airbnb, Expedia and the Travel Technology Association opposed the bill, while consumer groups supported it.

Hot items in online marketplaces
Also related to online marketplaces, SB 1144 is another attempt to crack down on the sale of stolen goods online. The law forces online sites like Facebook Marketplace to adopt policies banning the sale of stolen goods on their platforms and to notify law enforcement when it happens. The law already required high-volume online sellers to submit their names, bank account information, phone numbers and email addresses to online marketplace platforms. This new law was spearheaded by former East Bay state Sen. Nancy Skinner, a Democrat. Cities and district attorneys supported the bill, while the Chamber of Progress, a tech trade group, opposed it.

Tracking state mental health treatment
Lawmakers also made tweaks to one of Newsom's prized mental health initiatives.
The CARE Act, passed in 2022, set up new mental health courts aimed at getting people with serious psychiatric disorders into treatment and housing - not incarceration or a life on the street. CARE courts work by empowering family members, close friends, first responders, behavioral health providers and others to refer people with severe, untreated psychiatric issues to the program. If someone is eligible, a judge helps facilitate a treatment plan, which may include medication, drug counseling and a bed in supportive housing or a residential care facility. Counties in Silicon Valley and the East Bay started their CARE Court programs last year. SB 42, which goes into effect Tuesday, gives a mentally ill person's friends and family, as well as others who referred them to the court, the right to be updated about the court's work. It is also intended to streamline the legal process by reducing the court's obligation to inform the patient of their rights. Families Advocating for the Seriously Mentally Ill and California Professional Firefighters advocated for the bill. The ACLU's action wing opposed it, as did Disability Rights California and Mental Health America of California. Those groups contended the law violates patients' privacy.

New wage rates kick in
In the Bay Area, workers will also get a pay boost when laws already approved in past years in these cities raise the minimum wage on July 1:
- City of Alameda, from $17/hour to $17.46/hour
- Berkeley, from $18.67/hour to $19.18/hour
- Emeryville, from $19.36/hour to $19.90/hour
- Fremont, from $17.30/hour to $17.75/hour
- Milpitas, from $17.70/hour to $18.20/hour

Copyright (C) 2025, Tribune Content Agency, LLC. Portions copyrighted by the respective providers.