
OpenAI pauses Jony Ive partnership marketing in trademark dispute
OpenAI last month announced it was buying io Products, a product and engineering company co-founded by Jony Ive, the former senior vice president of industrial design and chief design officer at Apple, in a deal valued at nearly $6.5 billion (€5.6 billion).
A competing firm called IYO, which had pitched its AI hardware to OpenAI CEO Sam Altman's personal firm and to Ive's design firm, filed a trademark complaint against the deal.
US District Judge Trina Thompson ruled late Friday that IYO, a Google-backed hardware start-up, has a strong enough trademark infringement case to proceed to a hearing in October.
Until then, she ordered Altman, Ive and OpenAI to refrain from 'using the IYO mark, and any mark confusingly similar thereto, including the IO mark in connection with the marketing or sale of related products.'
'IYO will not roll over'
OpenAI responded by scrubbing its website of mentions of the new venture, saying instead that the page 'is temporarily down due to a court order.
'We don't agree with the complaint and are reviewing our options,' the company added.
IYO CEO Jason Rugolo applauded the ruling Monday in a written statement to the Associated Press, saying the start-up will aggressively protect its brand and technology investments.
'IYO will not roll over and let [Altman] and [Ive] trample on our rights, no matter how rich and famous they are,' Rugolo said.
Altman said in a June 12 court filing that he and Ive decided on the io name for their collaboration "because it is a common phrase for 'input/output'" and that their intent with the collaboration "was, and is, to create products that go beyond traditional products and interfaces".
He added that they acquired the io.com domain name in August 2023.
'Raising the issue of our name in bad faith'
A filing submitted to the court by the OpenAI team alleges that Rugolo approached OpenAI several times, seeking funding, offering to sell his company for $200 million, or asking how the two companies could work together.
After io was launched on May 21, the court filing says, Rugolo contacted Tang Tan, io's chief hardware officer, to congratulate him on the launch.
When Tan said he did not want to pursue a partnership, Rugolo raised an issue with the name for the first time.
"I was surprised to receive this email," Tan wrote in a case declaration. "Mr. Rugolo had never mentioned any issues with the io name in any of our prior communications over the past several weeks.
"It appeared to me that Mr. Rugolo was raising the issue of our name in bad faith to try to force us to do a deal with his company," he said.
What are Altman and Ive building?
In a filing to the court, lawyers for Altman and Ive say that their io project is not working on an in-ear or wearable AI device like the one IYO is developing.
"io is at least a year away from offering any goods or services, and the first product it intends to offer is not an in-ear device like the one Plaintiff is offering for 'presale' (but which is also still at least months away from its claimed release date)," the filing says.
Altman previously told OpenAI employees that the io prototype would be able to fit in a pocket or sit on a desk.
The device would eventually serve as a third device that users could own alongside their smartphone and laptop.
The court filing from OpenAI also says io developed several prototypes, including designs that were "desktop-based and mobile, wireless and wired, wearable and portable".
As part of this effort, io bought a "wide range" of earbuds, hearing aids and "at least 30" different headphone sets from companies including IYO.
The order for IYO's One earbuds required a down payment of $69 and was due to ship in the winter of 2024, but it was never fulfilled by the company, the filing added.
