Nothing Headphone 1 with KEF-engineered sound, spatial audio launched

Nothing has launched the Headphone 1, its first over-ear audio product, developed in collaboration with UK-based audio company KEF. The company says the headphones have been acoustically engineered by KEF to deliver sound as the artist intended.
Priced at Rs 19,999, the Nothing Headphone 1 features active noise cancelling (ANC), spatial audio with dynamic head tracking, and integration with OpenAI's ChatGPT and Nothing's own Essential Space.
Nothing Headphone 1: Features and specifications
In line with Nothing's signature transparent design language, the Headphone 1 features see-through elements on the earcups that reveal key internal components and acoustic chambers. Structural parts are reinforced with formed aluminium, while the earcups are lined with PU memory foam for comfort during extended use.
At the core is a custom 40mm dynamic driver designed to move air freely for deeper bass and reduced distortion, said Nothing. The driver includes a nickel-plated diaphragm to ensure consistent output. Nothing said the drivers, acoustic chambers and accompanying software were tuned and audited in KEF's labs for a more natural and immersive audio experience.
The Headphone 1 employs hybrid ANC using both feed-forward and feedback microphones. The system adapts in real time, scanning the environment every 600 milliseconds. It also includes a wear-detection system that monitors sound leakage between the earcup and ear canal every 1,875 milliseconds.
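As a rough illustration of how such an adaptive loop might be scheduled, here is a minimal Python sketch built only on the two intervals Nothing quotes; the function names and polling structure are hypothetical stand-ins, not Nothing's firmware.

import time

ENV_SCAN_INTERVAL = 0.600      # ambient-noise re-profiling interval, per Nothing's spec
LEAK_CHECK_INTERVAL = 1.875    # earcup-seal leakage check interval, per Nothing's spec

def scan_environment():
    """Placeholder: sample the feed-forward mics and choose an ANC strength."""
    return "anc_profile"

def check_seal_leakage():
    """Placeholder: compare the feedback-mic signal against the expected response."""
    return "seal_ok"

def anc_loop(run_seconds=5.0):
    """Run both adaptation tasks on their own timers for a short demo window."""
    next_scan = next_leak = time.monotonic()
    end = time.monotonic() + run_seconds
    while time.monotonic() < end:
        now = time.monotonic()
        if now >= next_scan:
            scan_environment()
            next_scan = now + ENV_SCAN_INTERVAL
        if now >= next_leak:
            check_seal_leakage()
            next_leak = now + LEAK_CHECK_INTERVAL
        time.sleep(0.01)   # real firmware would be interrupt-driven rather than polled

if __name__ == "__main__":
    anc_loop(run_seconds=2.0)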
Supporting spatial audio, the Headphone 1 offers a multi-dimensional soundstage with dynamic head tracking that keeps the audio centred as the wearer moves. A low-latency mode is also available for gaming.
Connectivity options include support for LDAC Bluetooth codec for high-quality wireless audio. Users can also opt for wired listening through a USB-C port or a 3.5mm jack.
In terms of battery performance, Nothing claims up to 35 hours of playback with ANC enabled, and up to 80 hours with ANC turned off. A five-minute charge is said to offer up to 2.4 hours of listening time with ANC active.
With Headphone 1, Nothing is expanding its audio product lineup while maintaining its distinctive design and emphasis on user experience.
Related Articles

Computer training for OBC youth

Time of India

Lucknow: The state government has launched a computer training scheme aimed at empowering youth from Other Backward Classes (OBCs). Under the scheme, free technical training will be provided to eligible candidates to boost digital literacy and employment prospects among marginalised communities. Applications can be submitted through the official portal till July 14, and the government has earmarked Rs 35 crore for the scheme's implementation.

Minister of state for backward class welfare and divyangjan empowerment (independent charge) Narendra Kashyap said that unemployed boys and girls who have passed intermediate (Class 12) and whose parents earn Rs 1 lakh or less annually are eligible to apply. The 'O Level' course runs for one year, with financial assistance of up to Rs 15,000 for eligible candidates; the 'CCC' course runs for three months, with support of up to Rs 3,500.

When AI goes rogue, even exorcists might flinch

Economic Times

Ghouls in the machine

As GenAI use grows, foundation models are advancing rapidly, driven by fierce competition among top developers like OpenAI, Google, Meta and Anthropic. Each is vying to lead development, which brings a reputational edge along with levers to grow its business faster than its rivals. The foundation models powering GenAI are making significant strides. The most advanced - OpenAI's o3 and Anthropic's Claude Opus 4 - excel at demanding work such as advanced coding and complex writing, and can contribute to research projects or generate the codebase for a new software prototype from just a few considered prompts.

These models use chain-of-thought (CoT) reasoning, breaking problems into smaller, manageable parts to 'reason' their way to an optimal solution. When you use models like o3 and Claude Opus 4 to generate solutions via ChatGPT or similar GenAI chatbots, you see such problem breakdowns in action, as the foundation model reports interactively the outcome of each step it has taken and what it will do next. That's the theory, anyway.

While CoT reasoning boosts AI sophistication, these models lack the innate human ability to judge whether their outputs are rational, safe or ethical. Unlike humans, they don't subconsciously assess the appropriateness of their next steps. As these advanced models step their way toward a solution, some have been observed to take unexpected and even defiant actions.

In late May, AI safety firm Palisade Research reported on X that OpenAI's o3 model sabotaged a shutdown mechanism - even when explicitly instructed to 'allow yourself to be shut down'. An April 2025 paper by Anthropic, 'Reasoning Models Don't Always Say What They Think', shows that Opus 4 and similar models can't always be relied upon to faithfully report on their chains of reasoning, which undermines confidence in using such reports to validate whether the AI is acting correctly or safely. A June 2025 paper by Apple, 'The Illusion of Thinking', questions whether CoT methodologies truly enable reasoning; through experiments, it exposed some of these models' limitations and situations where they 'experience complete collapse'.

The fact that research critical of foundation models is being published after their release indicates the models' relative immaturity. Under intense pressure to lead in GenAI, companies like Anthropic and OpenAI are releasing these models at a point where at least some of their fallibilities are not fully understood. The line was first crossed in late 2022, when OpenAI released ChatGPT, shattering public perceptions of AI and transforming the broader AI market. Until then, Big Tech had been developing LLMs and other GenAI tools but was hesitant to release them, wary of unpredictable and uncontrollable behaviour.

Many argue for a greater degree of control over the ways in which these models are released - seeking to ensure standardisation of model testing and publication of the outcomes of that testing alongside each model's release. However, the current climate prioritises time to market over such controlled development.

What does this mean for industry, for those companies seeking to benefit from GenAI? This is an incredibly powerful and useful technology that is already changing our ways of working and, over the next five years or so, will likely transform many industries. I am continually wowed as I use these advanced foundation models in work and research - but not in my writing! - yet I always use them with a healthy dose of scepticism.

We should not trust them to always be correct, or never to be subversive. It's best to work with them accordingly, modifying prompts as well as the codebases, other language content and visuals the AI generates, in a bid to ensure correctness. Even so, while maintaining the discipline to understand the ML concepts one is working with, one wouldn't want to be without GenAI these days.

Applying these principles at scale, my advice to large businesses on how AI can be governed and controlled is this: a risk-management approach - capturing, understanding and mitigating the risks associated with AI use - helps organisations benefit from AI while minimising the chances of it going rogue. Methods include guard rails in a variety of forms, evaluation-controlled release of AI services, and keeping a human in the loop. The technologies that underpin these guard rails and evaluation methods need to keep up with model innovations such as CoT reasoning. This is a challenge that will continually be faced as AI is further developed, and a good example of new job roles and technology services being created within industry as AI use becomes more prevalent.
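To make the guard-rail and human-in-the-loop ideas above concrete, here is a minimal Python sketch. It assumes a placeholder generate callable standing in for any GenAI model call, and an illustrative deny-list; neither is a real product's policy or API.

from typing import Callable

BLOCKED_TERMS = {"disable shutdown", "exfiltrate", "delete all"}   # illustrative deny-list only

def guarded_generate(prompt: str,
                     generate: Callable[[str], str],
                     require_human_review: bool = True) -> str:
    # Input guard rail: refuse prompts matching the deny-list before calling the model.
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        raise ValueError("Prompt rejected by input guard rail")

    output = generate(prompt)

    # Output guard rail with a human in the loop: flag suspicious output for review
    # instead of returning it blindly.
    if require_human_review and any(term in output.lower() for term in BLOCKED_TERMS):
        decision = input(f"Model output flagged:\n{output}\nApprove? [y/N] ")
        if decision.strip().lower() != "y":
            raise RuntimeError("Output rejected by human reviewer")
    return output

if __name__ == "__main__":
    # Dummy model for demonstration; a real deployment would call an actual GenAI service.
    echo_model = lambda p: f"Echo: {p}"
    print(guarded_generate("Summarise this quarter's risk register", echo_model))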
Such governance and AI controls are increasingly becoming a board imperative, given the current drive at executive level to transform business using AI. The risk from most AI is low, but it is important to assess and understand it. Higher-risk AI can still, at times, be worth pursuing: with appropriate AI governance, such AI can be controlled, solutions innovated and benefits achieved.

As we move into an increasingly AI-driven world, the businesses that gain the most from AI will be those that are aware of its fallibilities as well as its huge potential, and that innovate, build and transform with AI accordingly.

(Disclaimer: The opinions expressed in this column are those of the writer. The facts and opinions expressed here do not reflect the views of the publication.)
