OpenAI chairman says training your own AI model is a 'good way to burn through millions'

There's a scene in "The Dark Knight" where the Joker sets ablaze a massive pile of money. Deciding to develop your own frontier AI model today may be a similar exercise in burning cash — just ask OpenAI chairman Bret Taylor.
Taylor, who has worked for three companies that have trained LLMs (Google, Facebook, and OpenAI), called training new AI models a "good way to burn through millions of dollars."
On a recent episode of the Minus One podcast, Taylor advised AI founders to build services and use cases rather than entirely new frontier models.
"Unless you work at OpenAI or Anthropic or Google or Meta, you're probably not building one of those," said Taylor, who also cofounded Sierra AI. "It requires so much capital that it will tend towards consolidation."
That high bar of capital has stopped any "indie data center market" from forming, Taylor said, because it simply costs too much.
Taylor advised that founders work with the AI juggernauts instead — which, it's worth noting, is something that AI giants like OpenAI, where he's chairman, would directly benefit from. OpenAI sells API access priced in "tokens," and developers can build that API into their own applications and programs.
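For developers, building on a provider's API is a short exercise rather than a capital project. The sketch below is a minimal illustration, assuming the official openai Python SDK (v1.x), an OPENAI_API_KEY environment variable, and an example model name; every token in the request and response is metered and billed.

# Minimal sketch: building on a provider's API instead of training a model.
# Assumes the openai Python SDK (v1.x) and OPENAI_API_KEY set in the environment;
# the model name is an example, and usage is billed per input/output token.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "Summarize the three plans we offer."},
    ],
)

print(response.choices[0].message.content)
print(response.usage.total_tokens)  # the "tokens" the provider meters and bills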
While the American LLM market remains largely consolidated, international players have tested Taylor's theory. In January, DeepSeek released its R1 reasoning model and a corresponding chatbot. DeepSeek used fewer, less advanced chips to build its LLM, minimizing capital costs.
The Chinese AI app shot to No. 1 on the App Store charts, surpassing ChatGPT and igniting a debate in tech and on Wall Street about whether tech giants were overspending on AI model development.
In his podcast interview, Taylor laid out other paths that entrepreneurs could take in the AI market, rather than training a new model. One was the "AI tools market."
"This is the proverbial pickaxes in the gold rush," Taylor said. "It's a dangerous space because I think there's a lot of things that are scratching an itch today that the foundation model providers might do tomorrow."
Entrepreneurs could also try to build what Taylor called an "applied AI company."
"What were SaaS applications in 2010 will be agent companies in 2030, in my opinion," Taylor said.
Building a model from scratch, though, is a sure-fire way to "destroy your capital," Taylor said. He called homegrown models "fast-depreciating assets" that are far from cheap, costing their builders millions of dollars.

Related Articles

OpenAI: ChatGPT Wants Legal Rights. You Need The Right To Be Forgotten.

Forbes

As systems like ChatGPT move toward achieving legal privilege, the boundaries between identity, memory, and control are being redefined, often without consent.

When OpenAI CEO Sam Altman recently stated that conversations with ChatGPT should one day enjoy legal privilege, similar to those between a patient and a doctor or a client and a lawyer, he wasn't just referring to privacy. He was pointing toward a redefinition of the relationship between people and machines.

Legal privilege protects the confidentiality of certain relationships. What's said between a patient and physician, or a client and attorney, is shielded from subpoenas, court disclosures, and adversarial scrutiny. Extending that same protection to AI interactions means treating the machine not as a tool, but as a participant in a privileged exchange. This is more than a policy suggestion. It's a legal and philosophical shift with consequences no one has fully reckoned with.

It also comes at a time when the legal system is already being tested. In The New York Times' lawsuit against OpenAI, the paper has asked courts to compel the company to preserve all user prompts, including those the company says are deleted after 30 days. That request is under appeal. Meanwhile, Altman's suggestion that AI chats deserve legal shielding raises the question: if they're protected like therapy sessions, what does that make the system listening on the other side?

People are already treating AI like a confidant. According to Common Sense Media, three in four teens have used an AI chatbot, and over half say they trust the advice they receive at least somewhat. Many describe a growing reliance on these systems to process everything from school to relationships. Altman himself has called this emotional over-reliance "really bad and dangerous."

But it's not just teens. AI is being integrated into therapeutic apps, career coaching tools, HR systems, and even spiritual guidance platforms. In some healthcare environments, AI is being used to draft communications and interpret lab data before a doctor even sees it. These systems are present in decision-making loops, and their presence is being normalized.

This is how it begins. First, protect the conversation. Then, protect the system. What starts as a conversation about privacy quickly evolves into a framework centered on rights, autonomy, and standing.

We've seen this play out before. In U.S. law, corporations were gradually granted legal personhood, not because they were considered people, but because they acted as consistent legal entities that required protection and responsibility under the law. Over time, personhood became a useful legal fiction. Something similar may now be unfolding with AI—not because it is sentient, but because it interacts with humans in ways that mimic protected relationships. The law adapts to behavior, not just biology.

The Legal System Isn't Ready For What ChatGPT Is Proposing

There is no global consensus on how to regulate AI memory, consent, or interaction logs. The EU's AI Act introduces transparency mandates, but memory rights are still undefined. In the U.S., state-level data laws conflict, and no federal policy yet addresses what it means to interact with a memory-enabled AI. (See my recent Forbes piece on why AI regulation is effectively dead—and what businesses need to do instead.)

The physical location of a server is not just a technical detail. It's a legal trigger. A conversation stored on a server in California is subject to U.S. law. If it's routed through Frankfurt, it becomes subject to GDPR. When AI systems retain memory, context, and inferred consent, the server location effectively defines sovereignty over the interaction. That has implications for litigation, subpoenas, discovery, and privacy.

"I almost wish they'd go ahead and grant these AI systems legal personhood, as if they were therapists or clergy," says technology attorney John Kheit. "Because if they are, then all this passive data collection starts to look a lot like an illegal wiretap, which would thereby give humans privacy rights/protections when interacting with AI. It would also, then, require AI providers to disclose 'other parties to the conversation', i.e., that the provider is a mining party reading the data, and if advertisers are getting at the private conversations."

Infrastructure choices are now geopolitical. They determine how AI systems behave under pressure and what recourse a user has when something goes wrong. And yet, underneath all of this is a deeper motive: monetization.

But they won't be the only ones asking questions. Every conversation becomes a four-party exchange: the user, the model, the platform's internal optimization engine, and the advertiser paying for access. It's entirely plausible for a prompt about the Pittsburgh Steelers to return a response that subtly inserts "Buy Coke" mid-paragraph. Not because it's relevant—but because it's profitable.

Recent research shows users are significantly worse at detecting unlabeled advertising when it's embedded inside AI-generated content. Worse, these ads are initially rated as more trustworthy until users discover they are, in fact, ads. At that point, they're also rated as more manipulative.

"In experiential marketing, trust is everything," says Jeff Boedges, Founder of Soho Experiential. "You can't fake a relationship, and you can't exploit it without consequence. If AI systems are going to remember us, recommend things to us, or even influence us, we'd better know exactly what they remember and why. Otherwise, it's not personalization. It's manipulation."

Now consider what happens when advertisers gain access to psychographic modeling: "Which users are most emotionally vulnerable to this type of message?" becomes a viable, queryable prompt. And AI systems don't need to hand over spreadsheets to be valuable. With retrieval-augmented generation (RAG) and reinforcement learning from human feedback (RLHF), the model can shape language in real time based on prior sentiment, clickstream data, and fine-tuned advertiser objectives. This isn't hypothetical—it's how modern adtech already works.

At that point, the chatbot isn't a chatbot. It's a simulation environment for influence. It is trained to build trust, then designed to monetize it. Your behavioral patterns become the product. Your emotional response becomes the target for optimization. The business model is clear: black-boxed behavioral insight at scale, delivered through helpful design, hidden from oversight, and nearly impossible to detect.

We are entering a phase where machines will be granted protections without personhood, and influence without responsibility. If a user confesses to a crime during a legally privileged AI session, is the platform compelled to report it or remain silent? And who makes that decision? These are not edge cases. They are coming quickly. And they are coming at scale.

Why ChatGPT Must Remain A Model—and Why Humans Must Regain Consent

As generative AI systems evolve into persistent, adaptive participants in daily life, it becomes more important than ever to reassert a boundary: models must remain models. They cannot assume the legal, ethical, or sovereign status of a person quietly. And the humans generating the data that train these systems must retain explicit rights over their contributions.

What we need is a standardized, enforceable system of data contracting, one that allows individuals to knowingly, transparently, and voluntarily contribute data for a limited, mutually agreed-upon window of use. This contract must be clear on scope, duration, value exchange, and termination. And it must treat data ownership as immutable, even during active use.

That means: When a contract ends, or if a company violates its terms, the individual's data must, by law, be erased from the model, its training set, and any derivative products. "Right to be forgotten" must mean what it says.

But to be credible, this system must work both ways: This isn't just about ethics. It's about enforceable, mutual accountability. The user experience must be seamless and scalable. The legal backend must be secure. And the result should be a new economic compact—where humans know when they're participating in AI development, and models are kept in their place.

ChatGPT Is Changing the Risk Surface. Here's How to Respond.

The shift toward AI systems as quasi-participants—not just tools—will reshape legal exposure, data governance, product liability, and customer trust. Whether you're building AI, integrating it into your workflows, or using it to interface with customers, here are five things you should be doing immediately:

ChatGPT May Get Privilege. You Should Get the Right to Be Forgotten.

This moment isn't just about what AI can do. It's about what your business is letting it do, what it remembers, and who gets access to that memory. Ignore that, and you're not just risking privacy violations, you're risking long-term brand trust and regulatory blowback.

At the very least, we need a legal framework that defines how AI memory is governed. Not as a priest, not as a doctor, and not as a partner, but perhaps as a witness. Something that stores information and can be examined when context demands it, with clear boundaries on access, deletion, and use.

The public conversation remains focused on privacy. But the fundamental shift is about control. And unless the legal and regulatory frameworks evolve rapidly, the terms of engagement will be set, not by policy or users, but by whoever owns the box.

Which is why, in the age of AI, the right to be forgotten may become the most valuable human right we have. Not just because your data could be used against you—but because your identity itself can now be captured, modeled, and monetized in ways that persist beyond your control. Your patterns, preferences, emotional triggers, and psychological fingerprints don't disappear when the session ends. They live on inside a system that never forgets, never sleeps, and never stops optimizing.

Without the ability to revoke access to your data, you don't just lose privacy. You lose leverage. You lose the ability to opt out of prediction. You lose control over how you're remembered, represented, and replicated. The right to be forgotten isn't about hiding. It's about sovereignty.
And in a world where AI systems like ChatGPT will increasingly shape our choices, our identities, and our outcomes, the ability to walk away may be the last form of freedom that still belongs to you.
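The data-contracting idea described above maps naturally onto a small data structure. The sketch below is purely illustrative, with hypothetical field and method names; it shows how a contract could record the scope, duration, value exchange, and termination terms the article calls for, and how expiry or revocation could be checked before data is used.

# Illustrative sketch of a per-user data contract (hypothetical names throughout):
# scope, duration, value exchange, and termination, as described in the article.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class DataContract:
    subject_id: str          # the person contributing the data
    scope: tuple             # permitted uses, e.g. ("model training",)
    granted_at: datetime
    duration: timedelta      # the agreed window of use
    compensation: str        # the value exchange
    revoked: bool = False

    def expires_at(self) -> datetime:
        return self.granted_at + self.duration

    def is_active(self, now: datetime) -> bool:
        # Active only while unrevoked and inside the agreed window.
        return not self.revoked and now < self.expires_at()

    def terminate(self) -> None:
        # Termination or breach flags the contract; downstream systems would then
        # have to erase the data from the model, training set, and derivatives.
        self.revoked = True

# Usage: a 90-day grant that the contributor later revokes.
contract = DataContract(
    subject_id="user-123",
    scope=("model training",),
    granted_at=datetime(2025, 1, 1, tzinfo=timezone.utc),
    duration=timedelta(days=90),
    compensation="$20 service credit",
)
now = datetime(2025, 2, 1, tzinfo=timezone.utc)
print(contract.is_active(now))   # True: inside the window, not revoked
contract.terminate()
print(contract.is_active(now))   # False: consent withdrawn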

U.S., China discuss tariffs in effort to extend truce, Reuters says

Business Insider

U.S. and Chinese negotiators are meeting to tackle economic disputes, aiming to extend a truce meant to keep higher tariffs at bay, David Lawder of Reuters reports. China currently faces an August 12 deadline to reach a tariff agreement with President Donald Trump's administration.

'Quishing' scams dupe millions of Americans as cybercriminals exploit QR codes

NBC News

QR codes were once a quirky novelty that prompted a fun scan with the phone. Early on, you might have seen a QR code on a museum exhibit and scanned it to learn more about the eating habits of the woolly mammoth or the military strategies of Genghis Khan. During the pandemic, QR codes became the default restaurant menu. However, as QR codes became a mainstay in more urgent aspects of American life, from boarding passes to parking payments, hackers have exploited their ubiquity.

"As with many technological advances that start with good intentions, QR codes have increasingly become targets for malicious use. Because they are everywhere — from gas pumps and yard signs to television commercials — they're simultaneously useful and dangerous," said Dustin Brewer, senior director of proactive cybersecurity services at BlueVoyant.

Brewer says that attackers exploit these seemingly harmless symbols to trick people into visiting malicious websites or unknowingly sharing private information, a scam that has become known as "quishing."

The increasing prevalence of QR code scams prompted a warning from the Federal Trade Commission earlier this year about unwanted or unexpected packages showing up with a QR code that, when scanned, "could take you to a phishing website that steals your personal information, like credit card numbers or usernames and passwords. It could also download malware onto your phone and give hackers access to your device."

State and local advisories this summer have reached across the U.S., with the New York Department of Transportation and Hawaii Electric warning customers about QR code scams.

The appeal to cybercriminals lies in the relative ease with which the scam operates: slap a fake QR code sticker on a parking meter or a utility bill payment warning and rely on urgency to do the rest.

"The crooks are relying on you being in a hurry and you needing to do something," said Gaurav Sharma, a professor in the department of electrical and computer engineering at the University of Rochester.

On the rise as traditional phishing fails

Sharma expects QR scams to increase as the use of QR codes spreads. Another reason QR codes have grown in popularity with scammers is that more safeguards have been put into place to tamp down on traditional email phishing campaigns.

A study this year from cybersecurity platform KeepNet Labs found that 26% of all malicious links are now sent via QR code. According to cybersecurity company NordVPN, 73% of Americans scan QR codes without verification, and more than 26 million have already been directed to malicious sites.

"The cat and mouse game of security will continue and that people will figure out solutions and the crooks will either figure out a way around or look at other places where the grass is greener," Sharma said.

Sharma is working to develop a "smart" QR code called an SDMQR (Self-Authenticating Dual-Modulated QR) that has built-in security to prevent scams. But first, he needs buy-in from Google and Microsoft, the companies that build the cameras and control the camera infrastructure. Putting company logos into QR codes isn't a fix, he said, because it can create a false sense of security and criminals can usually simply copy the logos.

Some Americans are wary of the increasing reliance on QR codes. "I'm in my 60s and don't like using QR codes," said Denise Joyal of Cedar Rapids, Iowa. "I definitely worry about security issues. I really don't like it when one is forced to use a QR code to participate in a promotion with no other way to connect. I don't use them for entertainment-type information."

Institutions are also trying to fortify their QR codes against intrusion. Natalie Piggush, spokeswoman for the Children's Museum of Indianapolis, which welcomes over one million visitors a year, said its IT staff began upgrading the museum's QR codes a couple of years ago to protect against what has become an increasingly significant threat.

"At the museum, we use stylized QR codes with our logo and colors as opposed to the standard monochrome codes. We also detail what users can expect to see when scanning one of our QR codes, and we regularly inspect our existing QR codes for tampering or for out-of-place codes," Piggush said.

Museums are usually less vulnerable than places like train stations or parking lots because scammers are looking to collect cash from people expecting to pay for something. A patron at a museum is less likely to expect to pay, although Sharma said that even in those settings, fake QR codes can be deployed to install malware on someone's phone.

Apple, Android user trust is an issue

QR code scams are likely to hit both Apple and Android devices, but iPhone users may be slightly more likely to fall victim to the crime, according to a study completed earlier this year by Malwarebytes. iPhone users expressed more trust in their devices than Android owners, and that, researchers say, could cause them to let down their guard. For example, 70% of iPhone users have scanned a QR code to begin or complete a purchase, versus 63% of Android users who have done the same.

Malwarebytes researcher David Ruiz wrote that this trust could have an adverse effect, in that iPhone users do not feel the need to change their behavior when making online purchases and have less interest in (or may simply not know about) additional cybersecurity measures, like antivirus software. Fifty-five percent of iPhone users trust their device to keep them safe, versus 50 percent of Android users expressing the same sentiment.

Low investment, high return hacking tactic

A QR code is more dangerous than a traditional phishing email because users typically can't read or verify the encoded web address. Even though QR codes normally include human-readable text, attackers can modify this text to deceive users into trusting the link and the website it directs to. The best defense is to avoid scanning unwanted or unexpected QR codes and to favor codes that display the URL when scanned.

Brewer says cybercriminals have also been leveraging QR codes to infiltrate critical networks. "There are also credible reports that nation-state intelligence agencies have used QR codes to compromise messaging accounts of military personnel, sometimes using software like Signal that is also open to consumers," Brewer said. Nation-state attackers have even used QR codes to distribute remote access trojans (RATs) — a type of malware designed to operate without a device owner's consent or knowledge — enabling hackers to gain full access to targeted devices and networks.

Still, one of the most dangerous aspects of QR codes is how they are woven into the fabric of everyday life, a cyberthreat hiding in plain sight. "What's especially concerning is that legitimate flyers, posters, billboards, or official documents can be easily compromised. Attackers can simply print their own QR code and paste it physically or digitally over a genuine one, making it nearly impossible for the average user to detect the deception," Brewer said.

Rob Lee, chief of research, AI, and emerging threats at the cybersecurity training organization SANS Institute, says that QR code compromise is just another tactic in a long line of similar strategies in the cybercriminal playbook.

"QR codes weren't built with security in mind, they were built to make life easier, which also makes them perfect for scammers," Lee said. "We've seen this playbook before with phishing emails; now it just comes with a smiley pixelated square. It's not panic-worthy yet, but it's exactly the kind of low-effort, high-return tactic attackers love to scale."
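The defensive advice above, checking the decoded URL before opening it, can be partly automated. The following Python sketch is illustrative only, not a vetted security tool; the heuristics and the allowlisted host are assumptions, but they show the kinds of red flags a scanner app could surface once a QR code has been decoded to a URL.

# Rough sketch: red flags a scanner app might check in a URL decoded from a QR code.
# Illustrative only; the heuristics and allowlist below are assumptions, not a vetted tool.
from urllib.parse import urlparse

KNOWN_SHORTENERS = {"bit.ly", "tinyurl.com", "t.co"}   # shorteners hide the destination
TRUSTED_HOSTS = {"example-city-parking.gov"}           # hypothetical allowlist

def qr_url_warnings(url: str) -> list:
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    warnings = []
    if parsed.scheme != "https":
        warnings.append("not HTTPS")
    if host.startswith("xn--") or ".xn--" in host:
        warnings.append("punycode hostname (possible lookalike domain)")
    if host in KNOWN_SHORTENERS:
        warnings.append("link shortener hides the real destination")
    if parsed.username is not None:
        warnings.append("credentials embedded in the URL")
    if host not in TRUSTED_HOSTS:
        warnings.append("host is not on the expected allowlist")
    return warnings

# Usage: show the decoded URL and any red flags before opening it.
for link in ("http://bit.ly/pay-meter-42", "https://example-city-parking.gov/meter/42"):
    print(link, "->", qr_url_warnings(link) or "no obvious red flags")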
