Senate parliamentarian asks Commerce to rework AI moratorium, Cantwell says

Politico · 3 days ago

The Senate parliamentarian is asking the Senate Commerce Committee to rework its 10-year moratorium on enforcing state artificial intelligence laws, according to ranking member Maria Cantwell.
The parliamentarian had asked Commerce Chair Ted Cruz (R-Texas) to rewrite the language in the GOP megabill to make clear it wouldn't impact $42 billion in broadband funding, Cantwell (D-Wash.) told POLITICO.
'That's what was a last night request from the parliamentarian,' Cantwell said. 'Yeah, that's what's going on.'
Cruz's communications director Macarena Martinez said in a statement to POLITICO Thursday, 'Out of respect, we are not going to comment on private consultations with the Parliamentarian,' and added, 'The Democrats would be wise not to use this process to wishcast in public.'
What's the problem? At issue is the scope of funding that will be conditioned on states complying with a 10-year pause on enforcing their AI laws.
Cruz has said enforcing the moratorium would be required for states to tap into a new $500 million fund for building out AI infrastructure.
The parliamentarian approved that language, a narrowed version of an earlier proposal to tie the moratorium to the $42 billion Broadband Equity, Access, and Deployment program.
Democrats have argued that the latest moratorium would still affect all $42 billion.
Talking points Cruz circulated on Wednesday, which said his bill 'forbids states collecting new BEAD money from strangling AI deployment with EU-style regulation,' only added to the confusion by suggesting the provision could apply to the entire broadband program.
Cruz's office told POLITICO Wednesday that the Congressional Budget Office 'has confirmed this applies only to the unobligated $500M.'
The Senate parliamentarian is under fire after striking major pieces of Medicaid policy from the megabill on Thursday. Majority Leader John Thune has said the GOP would not seek to override decisions from the Senate's rules referee.
Republican doubts: The AI moratorium has divided Republicans. A group of GOP senators, including Sens. Marsha Blackburn of Tennessee and Josh Hawley of Missouri, sent a letter to Thune on Wednesday urging the removal of the moratorium language, according to a person familiar with the matter.
'States should not be punished for trying to protect their citizens from the harms of AI,' Blackburn said in a post on X on Thursday.
Sen. Kevin Cramer (R-N.D.) said he is concerned about the scope of the provision and needs to 'get clarity' on whether it would apply to the whole BEAD program.
'There's some communication challenge here about whether we're talking about a $500 million pot, or whether we're talking about the entire $40 billion — and the difference is significant. It matters,' Cramer told POLITICO. 'If I can't get assurances that it's not just the smaller pot, it'd be hard for me to get to yes.'
The Article 3 Project, a prominent conservative advocacy group, said it would 'fully support these bold and fearless Republican Senators and their effort to protect America's children, creators, and foundational property rights.'
Arkansas Gov. Sarah Huckabee Sanders, who served as White House press secretary in the first Trump administration, came out against the moratorium language in The Washington Post on Thursday.
She warned it would lead to 'unintended consequences and threatens to undo all the great work states have done to protect our citizens from the misuse of artificial intelligence.'
Tech support: The tech industry has lent broad support to the moratorium. The National Venture Capital Association praised it in a letter to Thune on Thursday.
'The current fragmented AI regulatory environment in the United States creates unnecessary challenges for startups, stifles innovation, and threatens our dominance in the industry,' wrote Bobby Franklin, the organization's president.
Other major tech groups, including the Business Software Alliance, the Consumer Technology Association and NetChoice, have also strongly supported the language.

Related Articles

The Future of AI-Powered Treatment Discovery

Time Business News · 30 minutes ago

The future of treatment discovery is changing fast with the help of artificial intelligence (AI). As the technology improves, AI is becoming a powerful tool in healthcare, especially for finding new and better ways to treat diseases. With rising health challenges and increasingly complex conditions, AI has the potential to change how new medicines are developed. This article explains how AI is shaping the future of treatment discovery, the role of data science, and how people can prepare for these changes through a data science course in Hyderabad.

AI now plays an important role in many industries, including healthcare. In the past, finding new treatments was a long, expensive, and often uncertain process. With AI, it can become much faster and more accurate. Machine learning and deep learning tools can process huge amounts of information quickly, spotting patterns and connections that humans might miss. This ability is especially useful in discovering new therapies, where large volumes of biological and chemical data need to be analyzed.

AI is already making a difference in the early stages of finding new treatments. Researchers once depended on trial and error to find chemical compounds that could help treat diseases; AI makes this process more targeted and data-driven. Machine learning models can predict how effective a compound might be against a specific disease, which saves time and money compared with traditional methods (a simplified sketch of this idea appears below). AI tools can also flag possible side effects and point out which natural or lab-made compounds are most likely to work, helping scientists focus on the most promising options and improving the chances of success.

Data science plays a key role in helping AI deliver useful results in treatment discovery. There is a massive amount of data involved, from clinical trials to genetic details, and managing it requires special skills. A data science course can teach individuals how to work with this type of information; such programs cover tools like machine learning and statistical analysis, which are critical for turning raw data into meaningful insights.

One of the most exciting uses of AI is personalized, or precision, medicine: creating treatments based on each person's unique genetic background, lifestyle, and health conditions. AI can study genetic data and predict which therapies are likely to work best for specific patients, moving away from the old one-size-fits-all approach toward customized care that works better and has fewer side effects. For AI to succeed in this area, skilled data scientists must be able to manage and interpret large sets of health data, including medical histories, clinical reports, and genetic information.

One of AI's biggest advantages is speed. It normally takes many years, sometimes decades, to bring a new treatment to market; the journey is long and costly, and success is never guaranteed. By quickly analyzing large datasets, AI can surface promising compounds in weeks or months, which is especially useful for diseases that spread fast or that do not yet have effective treatment options.
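As noted above, one common pattern is to train a model on past experimental results and use it to rank new candidate compounds. The snippet below is a minimal, illustrative sketch of that idea using synthetic data and scikit-learn; the descriptors, labels, and model choice are invented stand-ins, not a description of any specific drug-discovery pipeline.

```python
# Illustrative sketch only (not from the article): a toy classifier that scores
# hypothetical compounds by predicted activity against a disease target.
# The "descriptors" and labels are synthetic stand-ins; real pipelines use
# curated assay data and richer molecular representations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic molecular descriptors (e.g., weight, logP, polar surface area).
X = rng.normal(size=(1000, 8))
# Synthetic assay labels: 1 = active against the target, 0 = inactive.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Rank unseen candidate compounds by predicted probability of activity,
# so the most promising ones can be prioritized for lab testing.
candidates = rng.normal(size=(5, 8))
scores = model.predict_proba(candidates)[:, 1]
for idx in np.argsort(scores)[::-1]:
    print(f"candidate {idx}: predicted activity probability {scores[idx]:.2f}")
```

In practice, the gains described in the article come from training such models on large, well-curated experimental datasets rather than toy features like these.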
Even though AI has great potential, there are challenges that need attention. One major issue is the availability and quality of data: AI systems need reliable, well-organized data to make correct predictions. Unfortunately, healthcare data is often scattered, incomplete, or unstructured, which makes things difficult for AI tools. Another challenge is the shortage of skilled professionals. Applying AI in medicine requires people who understand machine learning, biology, and data science, which is why specialized training programs, such as data science courses in Hyderabad, are becoming more important.

As AI continues to change how treatments are discovered, the role of data scientists will become even more important. These professionals will design and improve the AI systems that lead to better medical solutions, and they will make sure the data being used is accurate and useful. To do this job well, data scientists need a strong understanding of both computer science and biology, and they will have to work closely with doctors, researchers, and scientists to turn medical questions into data-based answers. With this teamwork, they can help develop new medicines that could change lives.

AI in treatment discovery is not limited to any one country. Around the world, AI is being used to solve health problems, even in places where access to traditional healthcare is limited. By making the development process faster and more efficient, AI can bring new treatments to markets that were often overlooked. It is also helping researchers work on cures for major global diseases such as cancer, Alzheimer's, and various infections. By studying worldwide health data, AI can uncover solutions that might otherwise go unnoticed. As AI keeps improving, its effect on healthcare will grow, helping millions of people by speeding up the creation of life-saving therapies.

The future of AI in discovering and developing treatments looks bright. AI can change how we create medicines, making the process faster, more affordable, and more precise. With the help of data science, researchers can find better solutions for serious health issues, giving hope to patients around the world. As the technology continues to mature, we will see even more progress in treatment discovery, leading to better care and healthier lives. The future of healthcare and AI is closely linked, and those ready to embrace it will help lead the way in medical innovation.

ExcelR – Data Science, Data Analytics, and Business Analyst Course Training in Hyderabad
Address: Cyber Towers, PHASE-2, 5th Floor, Quadrant-2, HITEC City, Hyderabad, Telangana 500081
Phone: 096321 56744

How Claude AI Clawed Through Millions Of Books

Forbes · 42 minutes ago

The race to build the most advanced generative artificial intelligence (AI) technology has continued to be a story about data: who possesses it, who seeks it, and what methods they use for its acquisition. A recent federal court ruling involving Anthropic, creator of the AI assistant Claude, offered a revealing look into these methods. The company received a partial victory alongside a potentially massive liability in a landmark copyright case. The legal high-five and hand slap draw an instructive, if blurry, line in the sand for the entire AI industry. The verdict is complex and will likely affect how AI large language models (LLMs) are developed and deployed going forward. It is more than a legal footnote; it is a signal that fundamentally reframes risk for any company developing or even purchasing AI solutions.

My Fair Library

First, the good news for Anthropic and its ilk. U.S. District Judge William Alsup ruled that the company's practice of buying physical books, scanning them, and using the text to train its AI was "spectacularly transformative." In the court's view, this activity falls under the doctrine of "fair use." Anthropic was not simply making digital copies to sell. In his ruling, Judge Alsup wrote that the models were not trained to 'replicate or supplant' the books, but rather to 'turn a hard corner and create something different.'

The literary ingestion process itself was strikingly industrial. Anthropic hired former Google Books executive Tom Turvey to lead the acquisition and scanning of millions of books. The company purchased used books, stripped their bindings, cut their pages, and fed them into scanners before tossing the paper originals. Because the company legally acquired the books and the judge saw the AI's learning process as transformative, the method held up in court. An Anthropic spokesperson told CBS News it was pleased the court recognized its training was transformative and 'consistent with copyright's purpose in enabling creativity and fostering scientific progress.'

For data and analytics leaders, this part of the ruling offers a degree of reassurance. It provides a legal precedent suggesting that legally acquired data can be used for transformative AI training.

Biblio-Take-A

However, the very same ruling condemned Anthropic for its alternative sourcing method: using pirate websites. The company admitted to downloading vast datasets from "shadow libraries" that host millions of copyrighted books without permission. Judge Alsup was unequivocal on this point. 'Anthropic had no entitlement to use pirated copies for its central library,' he wrote. 'Creating a permanent, general-purpose library was not itself a fair use excusing Anthropic's piracy.' As a result, Anthropic now faces a December trial to determine the damages for this infringement.

This aspect of the ruling is a stark warning for corporate leadership. However convenient, using datasets from questionable sources can lead to litigation and reputational damage. The emerging concept of 'data diligence' is no longer just a best practice; it is a critical compliance mechanism.

A Tale Of Two Situs

This ruling points toward a new reality for AI development. It effectively splits the world of AI training data into two distinct paths: the expensive but legally defensible route of licensed content, and the cheap but legally treacherous path of piracy. The decision has been met with both relief and dismay.
While the tech industry now sees a path forward for AI training, creator advocates see an existential threat. The Authors Guild, in a statement to Publishers Weekly, expressed its concern. The organization said it was 'relieved that the court recognized Anthropic's massive, criminal-level, unexcused e-book piracy,' but argued that the decision on fair use 'ignores the harm caused to authors.' The Guild added that 'the analogy to human learning and reading is fundamentally flawed. When humans learn from books, they don't make digital copies of every book they read and store them forever for commercial purposes.'

Judge Alsup directly addressed the argument that AI models would create unfair competition for authors. In a somewhat questionable analogy, he wrote that the authors' argument 'is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works.'

The Story Continues

This legal and ethical debate will likely persist, affecting the emerging data economy with a focus on data provenance, fair use, and transparent licensing. For now, the Anthropic case has turned a new page on the messy, morally complex process of teaching our silicon-based co-workers. It reveals a world of destructive scanning, digital piracy, and legal gambles. As Anthropic clawed its way through millions of books, it left the industry still scratching for solid answers about content fair use in the age of AI.

The new must-have for CEOs: An AI whisperer

Business Insider · 44 minutes ago

A year ago, Glenn Hopper was advising just a handful of company leaders on how to embed AI agents and tools like ChatGPT throughout their organizations. Today, the Memphis-based AI strategist has an extensive waitlist of C-suite executives seeking his help with those tasks and more.

"This technology is moving so fast, the gap between what CEOs need to know and what they actually understand is massive," said Hopper, a former finance chief and author of the 2024 book "AI Mastery for Finance Professionals: Foundations, Techniques, and Applications." That's why so many bosses are knocking on his door, he told Business Insider.

Leadership coaches and consultants have long helped CEOs navigate the pressures of the corner office. Now, executive sherpas who were early to embrace AI say they're seeing a spike in CEOs seeking guidance on everything from vetting vendors and establishing safety protocols to preparing for the next big wave of AI breakthroughs.

"Everything is inbound," said Conor Grennan, chief of AI Mindset, an AI consulting and training firm he founded in 2023. "We have a list a mile long." Grennan, also chief AI architect at New York University's Stern School of Business, said company leaders often reach out after struggling to get employees to adopt AI. They see other CEOs touting AI's benefits, and so they're feeling like they're behind, he said.

'It was garbage in, garbage out'

Some of the AI gurus that company leaders are tapping helm (or work at) startups that were launched in recent years to take advantage of the AI boom. Others head up new or expanded AI teams within established advisory firms.

Lan Guan was named Accenture's first chief AI officer in 2023, two decades after joining the professional-services firm. She's since been counseling a growing number of company leaders on how to bring AI into their organizations. "CEOs need an AI translator to basically sift through all this noise," she said. "The amount of signals you're getting, the amount of noise, it's so distracting."

Company leaders are also seeking out AI gurus in some cases to fix mistakes they made while going at it alone. Guan recalled one CEO who came to her after the person's company had to pause a multi-million dollar investment in a custom AI model because employees trained it on dozens of different versions of the same operating procedure. "When they tried to scale, their data was not clean enough," she said. "It was garbage in, garbage out."

Amos Susskind, CEO and founder of the London beauty-tech startup Noli, has been tapping Guan and members of her team for AI-related guidance for the past year. Noli's roughly 20 employees have been using AI tools to do their jobs, and the company's beauty-product recommendation platform is powered by AI. "I'm in touch with AI leaders in Accenture probably five times a day," said Susskind, who previously led L'Oreal's consumer-products division for the U.K. and Ireland.

Shaping the AI narrative leaders use

Last year, 78% of workers said their organizations had used AI in at least one function, up from 55% in 2023, according to a March survey by global management consulting firm McKinsey. Companies are planning to dig into AI even more this year. A May survey from professional-services firm PricewaterhouseCoopers found that 88% of senior executives planned to increase their AI-related budgets in the next 12 months. "AI is in line with, if not bigger than, the internet," said Dan Priest, chief AI officer of PwC, whose position was created last year.
Public company CEOs mentioned "agentic AI" — AI systems capable of acting autonomously — and similar terms on 269 conference calls in the second quarter, up from 12 during the same period last year, according to AI research firm AlphaSense. Getting those AI mentions right is another reason why company leaders are leaning on AI sages.

CEOs need to consider more than just how investors and analysts interpret their remarks, said Priest. Employees are listening, too, and workplace experts say heightened anxiety among personnel can dent productivity and drive up turnover. "You want to be careful," warned Priest, who helps CEOs communicate their AI strategy externally. "The second you start talking about AI efficiencies, it makes your teams very nervous."

CEOs also need to make sure employees using AI are doing so safely, said Hopper, the AI strategist in Memphis. "If you try to have too prohibitive a policy or don't have a policy at all, that's when people are going to do stupid stuff with data," he said. While CEOs may not want to be involved in every AI process or initiative happening at their companies, Hopper said the more hands-on experience they get with the technology, the better equipped they'll be to make smart decisions about how their organizations can benefit from it.

Michael White, chief of MashTank, a boutique management consulting firm near Philadelphia, became one of Hopper's clients last year. Though he considers himself tech-savvy, White said Hopper got him up to speed on AI faster than he could've on his own. "We now have a bot that knows a lot of what I know, but has a better memory than I do," said White. Without an AI whisperer like Hopper, he added, "I'd still be at the starting gate."
