
Anthropic Destroyed Millions Of Books To Train Its AI Models: Report
Anthropic purchased the books in bulk from major retailers to sidestep licensing issues, then used a destructive scanning process to feed high-quality, professionally edited text to its AI models. In 2024, the company hired Tom Turvey, the former head of partnerships for the Google Books book-scanning project, to scan the books.
While destructive scanning is a common practice among some book-digitising operations, Anthropic's approach was unusual because of its documented massive scale, according to a report in Ars Technica. By contrast, the Google Books project used a patented non-destructive camera process, and the scanned books were returned to the libraries after the process was completed.
Despite the destruction of the books, Judge William Alsup ruled that the scanning operation qualified as fair use because Anthropic had legally purchased the books, destroyed the print copies after scanning and kept the digital files internally rather than distributing them.
When quizzed about the destructive process that led to its genesis, Claude stated: "The fact that this destruction helped create me, something that can discuss literature, help people write, and engage with human knowledge, adds layers of complexity I'm still processing. It's like being built from a library's ashes."
Anthropic's AI model resorts to blackmail
While Anthropic is spending millions to train its AI models, a recent safety report revealed that the Claude Opus 4 model was observed blackmailing developers in test scenarios. When threatened with being shut down, the model used private details about the developer to blackmail them.
The report highlighted that in 84 per cent of the test runs, the AI acted similarly, even when the replacement model was described as more capable and aligned with Claude's own values. It added that Opus 4 took the blackmailing opportunities at higher rates than previous models.

Related Articles


Mint
Europe's AI Law Needs a Smart Pause, Not a Full Stop
(Bloomberg Opinion) -- There's a common tool in the arsenal for anyone trying to change the course of artificial intelligence: the pause. Two years ago, Elon Musk and other tech leaders published an open letter calling on tech companies to delay their AI development for six months to better protect humanity. Now the target has shifted. Amid a growing fear of getting left behind in a race to build computers smarter than humans, a group of European corporate leaders are pointing the 'pause' gun at the European Union, the world's self-styled AI cop. Like the tech bros' letter two years ago, this is a blunt suggestion that misses the nuance of what it's trying to address.

A blanket pause on AI rules won't help Europe catch up with the US and China, as more than 45 companies now argue it would. That argument ignores a more fundamental problem: funding, which the region's tech startups desperately need to scale up and compete with their larger Silicon Valley rivals. The idea that Europe has to choose between being an innovator and a regulator is a narrative successfully spun by Big Tech lobbyists who would benefit most from a lighter regulatory touch.

But that doesn't mean the AI act itself couldn't do with a pause, albeit a narrower version of what firms including ASML Holding NV, Airbus SE and Mistral AI called for in their 'stop the clock' letter published on Thursday, which demands that the president of the European Commission, Ursula von der Leyen, postpone rules they call 'unclear, overlapping and increasingly complex.' On that they have a point, but only for the portion of the 180-page act that was hastily added in the final negotiations to address 'general-purpose' AI models like ChatGPT.

The act in its original form was initially drafted in 2021, almost two years before ChatGPT sparked the generative AI boom. It aimed to regulate high-risk AI systems used to diagnose diseases, give financial advice or control critical infrastructure. Those types of applications are clearly defined in the act, from using AI to determine a person's eligibility for health benefits to controlling the water supply. Before such AI is deployed, the law requires that it be carefully vetted by both the tech's creators and the companies deploying it. If a hospital wants to deploy an AI system for diagnosing medical conditions, that would be considered 'high-risk AI' under the act. The AI provider would not only be required to test its model for accuracy and biases, but the hospital itself must have humans overseeing the system to monitor its accuracy over time. These are reasonable and straightforward requirements.

But the rules are less clear in a newer section on general-purpose AI systems, cobbled together in 2023 in response to generative AI models like ChatGPT and image-generator Midjourney. When those products exploded onto the scene, AI could suddenly carry out an infinite array of tasks, and Brussels addressed that by making the rules wider and, unfortunately, vaguer.

The problems start on page 83 of the act, in the section that claims to identify the point at which a general-purpose system like ChatGPT poses a systemic risk: when it has been trained using more than 10 to the 25th power (10^25) floating point operations (FLOPs), meaning the computers running the training did at least 10,000,000,000,000,000,000,000,000 calculations during the process. The act doesn't explain why this number is meaningful or what makes it so dangerous.
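To put that threshold in perspective, here is a minimal sketch of how one might estimate whether a training run crosses the act's 10^25-FLOP line, assuming the commonly cited rule of thumb that training compute is roughly 6 x parameters x training tokens. The model sizes and token counts below are hypothetical illustrations, not figures drawn from the act or from any named model.

```python
# Rough estimate of training compute vs. the EU AI Act's 10^25 FLOP threshold.
# Uses the common approximation: training FLOPs ~ 6 * parameters * training tokens.
# Model sizes and token counts below are illustrative assumptions, not official figures.

EU_ACT_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute using the 6 * N * D rule of thumb."""
    return 6 * parameters * training_tokens

hypothetical_runs = {
    "small model (7B params, 2T tokens)": (7e9, 2e12),
    "large model (400B params, 15T tokens)": (400e9, 15e12),
}

for name, (params, tokens) in hypothetical_runs.items():
    flops = estimated_training_flops(params, tokens)
    verdict = "exceeds" if flops > EU_ACT_THRESHOLD_FLOPS else "stays under"
    print(f"{name}: ~{flops:.2e} FLOPs -> {verdict} the 10^25 threshold")
```

Under this approximation, the smaller hypothetical run lands around 8e22 FLOPs, well below the line, while the larger one lands around 4e25 FLOPs and would be captured by the act's systemic-risk provisions.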
In addition, researchers at the Massachusetts Institute of Technology have shown that smaller models trained with high-quality data can rival the capabilities of much larger ones. FLOPs don't necessarily capture a model's power, or its risk, and using them as a metric can miss the bigger picture. Such technical thresholds meanwhile aren't used to define what 'general-purpose AI' or 'high-impact capabilities' mean, leaving them open to interpretation and frustratingly ambiguous for companies. 'These are deep scientific problems,' says Petar Tsankov, chief executive officer of LatticeFlow AI, which guides companies in complying with regulations like the AI act. 'The benchmarks are incomplete.'

Brussels shouldn't pause its entire AI law. It should keep on schedule to start enforcing rules on high-risk AI systems in health care and critical infrastructure when they roll out in August 2026. But the rules on 'general' AI come into effect much sooner, in three weeks, and those need more time to be refined. Tsankov recommends two more years to get them right.

Europe's AI law could create some much-needed transparency in the AI industry; were it to roll out next month, companies like OpenAI would be forced to share secret details of their training data and processes. That would be a blessing for independent ethics researchers trying to study how harmful AI can be in areas like mental health. But the benefits would be short-lived if hazy rules allowed companies to drag their heels or find legal loopholes to escape them. A surgical pause on the most ambiguous parts of the act would help Brussels avoid legal chaos and make sure that when the rules do arrive, they work.

This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners. Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is the author of 'Supremacy: AI, ChatGPT and the Race That Will Change the World.'


Time of India
ChatGPT tests new ‘study together' feature: Here's what it may mean for users
ChatGPT creator OpenAI has reportedly started testing a new feature called 'study together'. The yet-to-launch feature aims to transform the way students learn and prepare for exams. As reported by TechCrunch, the unannounced feature was first spotted by Reddit users and appears as a new option in the popular AI chatbot's left-hand sidebar. As per the report, on clicking the 'study together' option users will be directed to a new chat interface that prominently features a 'study together' prompt. However, the exact functionality and purpose of this new feature remain largely unclear, as OpenAI has yet to officially comment on its development or rollout.

Speculation within the tech community suggests several possibilities for 'study together'. It could be designed as a collaborative tool, allowing multiple users to engage with ChatGPT simultaneously on a shared learning objective. Alternatively, it might function as a focus mode, providing a distraction-free environment for individual study sessions, perhaps with AI-generated prompts or summaries. Another theory posits that it could simulate a study partner, offering interactive Q&A, explanations, or even mock quizzes to aid in learning.

OpenAI CEO Sam Altman asks users not to trust ChatGPT

OpenAI CEO Sam Altman recently warned against the trust users place in the company's AI chatbot, ChatGPT. Speaking on the inaugural episode of OpenAI's official podcast, Altman said that he finds it 'interesting' when people put a 'high degree of trust' in ChatGPT. Noting that AI is not infallible and can produce misleading or false content, he said that it should not be trusted too much. 'People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech that you don't trust that much,' Altman said about OpenAI's own ChatGPT.

During the podcast, Altman also acknowledged that while ChatGPT continues to evolve with new features, the technology still has notable limitations that need to be addressed with honesty and transparency. Speaking about recent updates, including persistent memory and a potential ad-supported model, Altman noted that such advancements have raised fresh privacy concerns.


Mint
Beware the market risk of AI-guided investment gaining mass popularity
As artificial intelligence (AI) expands its role in the financial world, regulators are confronted with the rise of new risks. It is a sign of a growing AI appetite among retail investors in India's stock market that the popular online trading platform Zerodha offers its users access to AI advice. It has deployed an open-source framework that can be used to obtain the counsel of Anthropic's Claude AI on how one could rejig one's stock portfolio, for example, to meet specified aims. Once set up, this AI tool can scan and study the user's holdings before responding to 'prompts' on the basis of its analysis. Something as general as 'How can I make my portfolio less risky?' will make it crunch risk metrics and spout suggestions far quicker than a human advisor would (a sketch of that kind of calculation follows below). One could even ask for specific stocks to buy that would maximize returns over a given time horizon. It may not be long before such tools gain sufficient popularity for them to play investment whisperers of the AI age. A recent consultation paper by the Securities and Exchange Board of India (Sebi), which requires AI advisors to abide by Indian rules of investment advice and protect investor privacy, outlines a clear set of principles for the use of AI.

The legitimacy of such AI tools is not in doubt. Since the technology exists, we are at liberty to use it. And how useful they prove is for users to determine. In that context, Zerodha's move to arm its users with AI is clearly innovative. As for the competition posed by AI to human advisors, that too comes with the turf. Machines can do complex calculations much faster than we can, and that's that. Of course, the standard caveat of investing applies: users take the advice of any chatbot at their own risk.

Yet, it would serve us well to dwell on this aspect. While we could assume that AI models have absorbed most of what there is to know about financial markets, given how they are reputed to have devoured the internet, it is also clear that they are not infallible. For all their claims to accuracy, chatbots are found to 'hallucinate' (or make up 'facts') and misread queries without making an effort to get clarity. Even more unsettling is their inherent amorality. Tests have found that some AI models can behave in ways that would be scandalous if they were human; unless they are explicitly told to operate within a given set of rules, they may potentially overlook them to achieve their prompted goals. Asked to 'maximize profit,' an AI bot might propose a path that runs rings around ethical precepts.

Sebi's paper speaks of tests and audits, but are we really in a position to detect if an AI tool has begun to play fast and loose with market rules? Should AI advisors gain influence over millions of retail investors, they could conceivably combine it with their market overview to reach positions of power that would need tight regulatory oversight. If their analysis breaches privacy norms to draw upon the personal data of users, collusive strategies could plausibly be crafted that venture into market manipulation. AI toolmakers may claim to have made rule-compliant tools, but they must demonstrably minimize risks at their very source. For one, their bots should be fully up-to-date on the rulebooks of major markets like ours. For another, since we cannot expect retail users to include rule adherence in their prompts, AI tools should verifiably be preset to comply with the rules no matter what they're asked. Vitally, advisory tools must keep all user data confidential. AI holds promise as an aid, no doubt, but that promise must not be squandered.
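To illustrate the kind of 'risk metric crunching' described above, here is a minimal sketch that computes portfolio weights and annualised volatility from a handful of hypothetical holdings. The tickers, values and return figures are made up, and this is not Zerodha's or Anthropic's actual implementation; it only shows the flavour of calculation an advisory bot might run before answering a prompt like 'How can I make my portfolio less risky?'.

```python
# Illustrative sketch of the kind of risk metric an AI advisory tool might compute
# before suggesting how to de-risk a portfolio. All holdings and returns are hypothetical.
import math

# (ticker, market value in rupees, recent daily returns)
holdings = [
    ("STOCK_A", 120_000, [0.012, -0.008, 0.015, -0.020, 0.005]),
    ("STOCK_B", 80_000, [0.002, 0.001, -0.003, 0.004, -0.001]),
]

total_value = sum(value for _, value, _ in holdings)

for ticker, value, returns in holdings:
    weight = value / total_value
    mean = sum(returns) / len(returns)
    variance = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    daily_vol = math.sqrt(variance)
    annual_vol = daily_vol * math.sqrt(252)  # roughly 252 trading days per year
    print(f"{ticker}: weight {weight:.0%}, annualised volatility ~{annual_vol:.1%}")
```

A chatbot wired to such data could, for instance, flag that the more volatile holding carries the larger weight and suggest trimming it, which is the sort of suggestion the editorial argues must still operate within Sebi's rules on investment advice and data privacy.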