
Sam Altman calls Iyo lawsuit 'silly' after OpenAI scrubs Jony Ive deal from website
Altman said, in response to the suit, that Iyo CEO Jason Rugolo had been "quite persistent in his efforts" to get OpenAI to buy or invest in his company. In a post on X, Altman wrote that Rugolo is now suing OpenAI over the name in a case he described as "silly, disappointing and wrong."
The suit, filed earlier this month, stems from an announcement in May, when OpenAI said it was bringing on former Apple designer Jony Ive by acquiring his artificial intelligence startup io in a deal valued at about $6.4 billion. Iyo alleged that OpenAI, Altman and Ive had engaged in unfair competition and trademark infringement, and claimed that it's on the verge of losing its identity because of the deal.
OpenAI removed the blog post about the deal from its website, after a judge last week granted Iyo's request for a temporary restraining order to keep OpenAI and its associates "from using Plaintiff's IYO mark, and any mark confusingly similar thereto, including without limitation 'IO.'"
"This page is temporarily down due to a court order following a trademark complaint from iyO about our use of the name 'io,'" OpenAI says in a message that now appears at the link where the post had been. "We don't agree with the complaint and are reviewing our options."
On X, Altman posted screenshots of emails from Rugolo seeking investment and a transaction involving Iyo's intellectual property. Rugolo also wanted OpenAI to buy Iyo, Altman wrote.
Rugolo didn't immediately respond to a request for comment. But on X, he wrote that "there are 675 other two letter names they can choose that aren't ours."
The Iyo suit is among several legal challenges facing OpenAI, which is working to evolve its organizational structure to take on more capital as it builds out its AI models. OpenAI also is going up against The New York Times in a copyright infringement case, and separately against Elon Musk, who had helped start OpenAI as a nonprofit in 2015 and is now suing for breach of contract.
Iyo is accepting pre-orders for its Iyo One, an in-ear wearable device that contains 16 microphones. Ive hasn't released details about io's product plans, but Altman told The Wall Street Journal that io's inaugural device is not a smartphone.
Altman wrote in another Tuesday post that he wishes the Iyo team "the best building great products," and that "the world certainly needs more of that and less lawsuits."

Related Articles


Forbes (2 hours ago)
Why Machines Aren't Intelligent
OpenAI has announced that its latest experimental reasoning LLM, referred to internally as the 'IMO gold LLM', has achieved gold-medal level performance at the 2025 International Mathematical Olympiad (IMO). Unlike specialized systems like DeepMind's AlphaGeometry, this is a reasoning LLM, built with reinforcement learning and scaled inference, not a math-only engine. As OpenAI researcher Noam Brown put it, the model showed 'a new level of sustained creative thinking' required for multi-hour problem-solving. CEO Sam Altman said this achievement marks 'a dream… a key step toward general intelligence', and that such a model won't be generally available for months.
Undoubtedly, machines are becoming exceptionally proficient at narrowly defined, high-performance cognitive tasks: mathematical reasoning, formal proof construction, symbolic manipulation, code generation, and formal logic. Their capabilities also extend to computer vision, complex data analysis, language processing, and strategic problem-solving, thanks to advances in deep learning architectures (such as transformers and convolutional neural networks), the availability of vast datasets for training, substantial increases in computational power, and sophisticated algorithmic optimization techniques that enable these systems to identify intricate patterns and correlations within data at unprecedented scale and speed. These systems can accomplish sustained multi-step reasoning, generate fluent human-like responses, and perform under expert-level constraints similar to humans.
With all this, and a bit of enthusiasm, we might be tempted to think that machines are becoming incredibly intelligent, incredibly quickly. Yet this would be a mistake, because being good at mathematics, formal proof construction, symbolic manipulation, code generation, formal logic, computer vision, complex data analysis, language processing, and strategic problem-solving is neither a necessary nor a sufficient condition for 'intelligence', let alone for incredible intelligence. The fundamental distinction lies in several key characteristics that machines demonstrably lack.
Machines cannot seamlessly transfer knowledge or adapt their capabilities to entirely novel, unforeseen problems or contexts without significant re-engineering or retraining. They are inherently specialized: proficient at tasks within their pre-defined scope, with impressive performance confined to the specific domains and types of data on which they have been extensively trained. This contrasts sharply with the human capacity for flexible learning and adaptation across a vast and unpredictable array of situations.
Machines do not possess the capacity to genuinely experience or comprehend emotions, nor can they truly interpret the nuanced mental states, intentions, or feelings of others (often referred to as "theory of mind"). Their "empathetic" or "socially aware" responses are sophisticated statistical patterns learned from vast datasets of human interaction, not a reflection of genuine subjective experience, emotional resonance, or an understanding of human affect.
Machines lack self-awareness and the capacity for introspection. They do not reflect on their own internal processes, motivations, or the nature of their "knowledge." Their operations are algorithmic and data-driven; they do not possess a subjective "self" that can ponder its own existence, learn from its own mistakes through conscious reflection, or develop a personal narrative.
Machines do not exhibit genuine intentionality, innate curiosity, or the capacity for autonomous goal-setting driven by internal desires, values, or motivations. They operate purely on programmed objectives and the data inputs they receive; their "goals" are externally imposed by their human creators rather than emerging from an internal drive or will.
Machines lack the direct, lived, and felt experience that comes from having a physical body interacting with and perceiving the environment. This embodied experience is crucial for developing common sense, intuitive physics, and a deep, non-abstracted understanding of the world. While machines can interact with and navigate the physical world through sensors and actuators, their "understanding" of reality is mediated by symbolic representations and data.
Machines do not demonstrate genuine conceptual leaps, the ability to invent entirely new paradigms, or the capacity to break fundamental rules in a truly meaningful and original way that transcends their training data; generative models can only produce novel combinations of existing data. Machines also often struggle with true cause-and-effect reasoning. Even though they excel at identifying correlations and patterns, correlation is not causation: they can predict "what" is likely to happen based on past data, but their understanding of "why" is limited to statistical associations rather than deep mechanistic insight.
Machines cannot learn complex concepts from just a few examples. One-shot and few-shot learning have made progress in enabling machines to recognize new patterns or categories from limited data, but unlike humans, they cannot grasp genuinely complex, abstract concepts from a handful of examples and still typically require vast datasets for effective and nuanced training.
And perhaps the most profound distinction: machines do not possess subjective experience, feelings, or awareness. They are not conscious entities.
Only when a machine is capable of all (or at least most) of these characteristics, even at a relatively low level, could we reasonably claim that machines are becoming 'intelligent', without exaggeration, misuse of the term, or mere fantasy. Therefore, while machines are incredibly powerful for specific cognitive functions, their capabilities are fundamentally different from the multifaceted, adaptable, self-aware, and experientially grounded nature of intelligence, particularly as manifested in humans. Their proficiency is a product of advanced computational design and data processing, not an indication of a nascent form of intelligence in machines.
In fact, the term "artificial general intelligence" emerged in AI discourse in part to recover the meaning of "intelligence" after it had been diluted through overuse in describing machines that are not "intelligent", and to clarify what these so-called "intelligent" machines still lack in order to really be "intelligent". We all tend to oversimplify, and the field of AI is contributing to the evolution of the meaning of 'intelligence', making the term increasingly polysemous. That's part of the charm of language. And as AI stirs both real promise and real societal anxiety, it's also worth remembering that the intelligence of machines does not exist in any meaningful sense.
The rapid advances in AI signal that it is beyond time to think about the impact we want, and don't want, AI to have on society. In doing so, we should not only allow but actively encourage consideration of both AI's capacities and its limitations, making every effort not to confuse 'intelligence' in its rich, general sense with the narrow, task-specific behaviors machines are capable of simulating or exhibiting. While some are racing for Artificial General Intelligence (AGI), the question we should now be asking is not when they think they might succeed, but whether what they believe they could make happen truly makes sense civilisationally as something we should even aim to achieve, and where we draw the line on algorithmic transhumanism.
Yahoo (7 hours ago)
'It's Never Happened in the History of Tech to Any Company Before': OpenAI's Sam Altman Says ChatGPT Is Growing at an Unprecedented Rate
When Sam Altman, CEO of OpenAI, described the extraordinary surge in user demand following a viral AI launch, he offered a candid glimpse into the operational pressures that come with leading the artificial intelligence (AI) revolution. Altman's remarks, delivered during a Bloomberg Originals interview, capture both the scale of recent events and the practical constraints that even the world's most advanced AI companies must contend with.
Speaking about the massive spike in users that followed the launch of Studio Ghibli-style image generation in a recent ChatGPT release, Altman recounted: 'This level of virality is an unusual thing. This last week, I don't think this has happened in the history of tech to any company before. I've seen viral moments, but I have never seen anyone have to deal with an influx of usage like this.'
Altman's experience, while anecdotal, is rooted in the realities of managing systems that can attract millions of new users in a matter of hours. When pressed on the numbers, Altman confirmed that OpenAI added more than a million users in just a few hours, an unprecedented feat even by the standards of Silicon Valley.
The technical demands of such growth are immense. Altman explained that generating images with the latest AI models is a computationally intensive process. To cope with the surge, OpenAI had to divert compute resources from research and slow down other features, highlighting the finite nature of its infrastructure. 'It's not like we have hundreds of thousands of GPUs sitting around spinning idly,' he noted, underscoring the limits faced even by leading AI firms.
Altman's authority on these matters is well established. As the architect behind OpenAI's rise, he has overseen the development and deployment of some of the most influential AI systems in the world. His leadership has been marked by a willingness to confront both the opportunities and the constraints of large-scale AI. The decisions to borrow compute capacity and restrict certain features reflect a pragmatic approach to resource management, a challenge that is increasingly central as AI adoption accelerates.
The quote also reveals Altman's forward-looking mindset. He described reviewing a list of planned feature launches and realizing that, without additional compute resources, not all could be delivered as intended. 'More compute means we can give you more AI,' he concluded, succinctly connecting infrastructure investment to the pace of innovation.
Altman's comments resonate in a market environment where demand for AI services routinely outstrips supply. The rapid adoption of generative AI tools has forced companies to rethink their infrastructure strategies, driving massive investments in data centers, GPUs, and cloud capacity. Industry observers note that such surges in usage are likely to become more common as AI applications proliferate across sectors.
In sum, Sam Altman's reflections on OpenAI's viral growth episode provide a window into the operational realities of modern AI development. His experience and measured responses reinforce his reputation as a leader capable of steering his company through both the promise and the growing pains of technological transformation.
On the date of publication, Caleb Naysmith did not have (either directly or indirectly) positions in any of the securities mentioned in this article. All information and data in this article is solely for informational purposes.


Android Authority (9 hours ago)
This is why I use two separate ChatGPT accounts
I'll admit it: I'm a bit of a recovering AI addict. While I've had mixed feelings about AI from the start, as someone who spends a lot of time lost in thought, I've found it can be a useful tool for ideation, proofreading, entertainment, and much more. Recently, I've started scaling back my usage for reasons beyond the scope of this article, but for a while, I actually had two paid ChatGPT accounts. I know what you're thinking, and you're right, it's a bit excessive. Still, in some cases, it really can make sense to have two accounts.
It all started when I found myself constantly hitting usage limits for my personal projects and entertainment, leaving me in a lurch when I needed AI for work-related tasks. For those who don't know, the ChatGPT Plus tier has different limits depending on the model. Some, like the basic GPT-4o, are virtually unlimited, while others have a firm daily or weekly count. For example, o3 lets you send 100 messages a week, o4-mini-high gives you 100 messages a day, and o4-mini gives you 300 a day. Outside of basic stuff like editing, I tend to rely the most on o3 and o4-mini-high, because they are actually willing to tell you that you're wrong, unlike many of the other models, which are people-pleasers to the extreme.
Realizing I was blowing through my message limits long before the week was up, I immediately started considering my options, including adding a Gemini subscription instead of a second ChatGPT plan. Truthfully, I had tried both before and always found myself coming back to ChatGPT, so the decision was basically made for me. At that point, I began manually migrating some of my old chats over to the new account, basically copying and pasting core logs into it and deleting the records from my original mixed-use account. As a freelancer, my goal was to make sure anything related to clients was separated from my personal projects, which were mostly entertainment or experimental (like messing around with the API and similar tools just to learn).
It wasn't even just about the limits, though. As you might know, ChatGPT can learn your preferences. It's not exactly learning or memory in the traditional sense; instead, it basically builds an abstract pattern of your communication styles and preferences. Let's just say my way of talking about personal matters is very different from my professional voice. Lots of cursing and the like. After splitting my usage, I noticed that ChatGPT actually became better suited to the specific tasks I was performing on each account, as it understood my preferences for each use case a little better. That's probably an oversimplification of how ChatGPT works, but you get the idea.
These days, I no longer pay for two accounts since I don't rely as heavily on ChatGPT or any other AI tool anymore, but it's useful to keep my old logs around, so I still have a ChatGPT Plus account for business and a free account for personal use. This way, I also retain the option of renewing my paid subscription if my usage habits change again in the future.
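To make the quota math above concrete, here's a minimal sketch of how you might track your own usage against those per-model caps. The model names and message counts come from this article; everything else (the Quota class, the send_message helper) is a hypothetical illustration, not an OpenAI API, and the real limits may change at any time.

from dataclasses import dataclass

@dataclass
class Quota:
    limit: int   # messages allowed per window
    window: str  # "day" or "week"
    used: int = 0

# Plus-tier caps as reported above (an assumption; subject to change)
QUOTAS = {
    "o3": Quota(limit=100, window="week"),
    "o4-mini-high": Quota(limit=100, window="day"),
    "o4-mini": Quota(limit=300, window="day"),
}

def send_message(model: str) -> bool:
    """Record one message against a model's quota; False once it's exhausted."""
    q = QUOTAS[model]
    if q.used >= q.limit:
        return False  # time to switch models, or accounts
    q.used += 1
    return True

Run a loop of send_message("o3") calls and the 101st returns False, which is exactly the wall I kept hitting mid-week.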
How do you sign up for two accounts, and is this a TOS violation?
Think you could benefit from a second account? Signing up for two accounts is easy as long as you have at least two different email addresses. For payment, I used two different credit or bank cards, though it's unclear whether that's really necessary.
The bigger question is whether it's actually okay to do this, or whether your accounts will get suspended for violating policy. When I first considered this, I did my research. According to the Terms of Service (TOS), there's no firm rule against having two accounts as long as you aren't purposely trying to circumvent usage limits. My first thought was, 'Well, I kind of am.' After all, running out of limits was a big part of my problem. Still, by separating accounts, I was doing more than just trying to increase my limits. By dividing business and personal/entertainment uses, I was also organizing information better, and I was making sure I didn't use up all my limits on personal stuff that would hurt my work productivity. Before, I'd burn through my limits pretty quickly on silly time-wasting stuff like writing alternate-timeline fiction and other entertainment.
Ultimately, having two accounts can be a bit of a gray area, but as long as you're careful about how and why you use each account, it's not technically against the TOS. For what it's worth, ChatGPT agrees, with some caveats. As the AI explains, two accounts are fine if your main reason for separating them is genuinely to keep business and personal activities distinct (billing, data, privacy, and not accidentally using up the business quota on personal stuff); this is a reasonable, defensible use. If you had one account and were hitting limits due to mixed usage, it's normal (and frankly smart) to create a second account for business, especially if your work depends on reliable access. As noted by the ChatGPT bot itself, the TOS is mainly aimed at stopping people from abusing the system by creating multiple accounts to stack free or paid uses, or for heavy API stacking. Reading the actual TOS gives the same picture.
Could this kind of 'gray area' usage still attract attention from OpenAI staff? Maybe, but as long as you're genuinely separating your use cases, there shouldn't be any major issues. In fact, it's common practice to create separate accounts for business use, including for tax purposes, so I'd wager this is probably more common than many realize.