Who wants to buy a piece of Mars?

The largest piece of Mars on Earth became the most valuable meteorite ever sold at auction at Sotheby's annual 'Geek Week,' and a full Ceratosaurus skeleton fetched $30 million. Next up for bidding: an Apple computer hand-built by Steve Jobs.
In the end, the long-dead dinosaur outperformed the largest piece of Mars ever found on Earth.
At Sotheby's on Wednesday, an exhibition-ready, mounted skeleton of a 150-million-year-old dinosaur—a juvenile Ceratosaurus nasicornis—sold for $30.5 million (including fees and costs), far exceeding its estimate of $6 million. Meanwhile, the 54-pound Martian meteorite NWA 16788 sold for $5.3 million; bidding was sluggish, and before fees and costs it fetched just $300,000 over its upper estimate of $4 million. Still, it remains the most valuable meteorite ever sold at auction.
The Ceratosaurus, dating from the late Jurassic Period and originally found in Bone Cabin Quarry, Wyoming, in 1996, measures around 6 feet 3 inches in height and 10 feet 8 inches in length. Consisting of 139 original fossil bone elements with additional sculpted materials, the skeleton has a virtually complete skull with 43 teeth present. Collectors from 37 countries bid for it.
Also among the 122 objects up for auction on Wednesday was the largest known lunar sphere, which at $825,500 set the record for the most valuable lunar meteorite ever sold at auction, taking its place as the second most valuable meteorite ever sold at auction after the Martian meteorite.
The original LED sign from SEGA's The Lost World: Jurassic Park Light Gun Arcade Game sold for around $20,000, the skull of a Pachycephalosaurus for $1.8 million, a Neanderthal tool set dated to around 400,000 years ago for $57,150, and the skeleton of a large cave bear found in Eastern Europe for $35,560.
Cassandra Hatton, Vice Chairman, Global Head, Science & Natural History, Sotheby's, said: 'These stellar results underscore a deep and enduring fascination and respect for the natural world—from the farthest reaches of space to the ancient depths of the Earth. What draws collectors is more than just a passion for science; it's a deep-seated curiosity about the forces that have shaped our planet and beyond.'
A hot market for dinos
The winning, as yet anonymous, buyer of the Ceratosaurus intends to loan it to an institution, 'as is fitting for a specimen of this rarity and importance,' Sotheby's said in a statement.
'Whether they will reveal their identity is not something I have the answer to,' Hatton told National Geographic, adding that she wasn't surprised the Ceratosaurus had commanded such a high price. 'It's a beautiful fossil, rare and important. I think it more than deserves the price it sold for.'
The bidding on the meteorite, Hatton said, had been slower because buyers were more tentative around something that has not had a comparable antecedent in the market. 'In the absence of a bidding precedent, you're going to look at the behavior of the other bidders,' she said. 'No one wants to be the person to make the first move.' In time, buyers may become as enthusiastic over meteorites as they are for dinosaur fossils and bones, Hatton said.
Last year at Sotheby's, billionaire Ken Griffin, founder and CEO of the hedge fund Citadel, successfully bid $44.6 million for a 150-million-year-old, 11-foot-tall, 27-foot-long Stegosaurus skeleton named 'Apex'—the most valuable fossil ever sold at auction. It had been expected to fetch only $6 million.
Some have voiced concern over the high-priced market. Andre LuJan, president of the Association of Applied Paleontology, told The New York Times that the increasing prices of leases for land where such finds were made were harming both academic research and commercial operators.
Hatton said both landowners and people who do the excavations should be paid 'properly' for their part in fossil discoveries—traditionally both parties have been 'cheated' in the process, she said.
It was important to 'diversify' the types of fossils coming to market, Hatton told National Geographic, as nuances within the field—'the Stegosaurus market is different to the T. rex market, which is different to the Ceratosaurus market'—inform not just pricing models, but such matters as how museums calculate insurance values.

The largest piece of Mars on Earth
The 'incredibly rare' NWA 16788 meteorite measures 14¾ x 11 x 6 inches and was apparently blown off the surface of Mars, then traveled the 140 million miles to Earth, crashing into the Sahara.
Classified as an olivine-microgabbroic shergottite, a type of Martian rock formed from the slow cooling of Martian magma, it was discovered by a meteorite hunter in Niger's remote Agadez region in November 2023. Pieces of Mars are 'unbelievably rare,' Sotheby's said—of the more than 77,000 officially recognized meteorites, only 400 are Martian meteorites.
'This Martian meteorite is the largest piece of Mars we have ever found by a long shot,' Hatton told the AP. 'So it's more than double the size of what we previously thought was the largest piece of Mars.'
NWA 16788 is approximately 70 percent larger than the next largest piece of Mars found on Earth, and is covered in a reddish-brown fusion crust, giving it 'a Martian hue.' Sotheby's said the meteorite had endured 'minimal terrestrial weathering' and was 'likely a relative newcomer here on Earth, having fallen from outer space rather recently.'
Prior to landing at Sotheby's, NWA 16788 was exhibited at the Italian Space Agency in Rome in 2024 and in a private gallery in Arezzo, Tuscany.
On Thursday afternoon, the final sale of Geek Week will feature what Sotheby's says is 'the finest operational Apple-1 computer in existence,' from the first batch of 50 hand-built by Steve Wozniak and Steve Jobs in 1976. The estimated auction price stands between $400,000 and $600,000.
Hatton declined to say what buyers could expect to bid on at Geek Week 2026.
'Space exploration was my first auction and my first passion,' she told National Geographic. 'I love the history of science and technology, the manuscripts, books, and Enigma machines. It's so great to get these objects in, and to tell their individual stories.'

Related Articles

Synchron Debuts First Thought-Controlled iPad Experience Using Apple's New BCI Human Interface Device Protocol

Business Wire

NEW YORK--(BUSINESS WIRE)-- Synchron, a category-defining brain-computer interface (BCI) company, today released the first-ever public demonstration of an individual using an iPad controlled entirely by thought, leveraging Apple's built-in accessibility features and new Brain-Computer Interface Human Interface Device (BCI HID) protocol. The video features Mark, a participant in Synchron's COMMAND clinical study and a person living with ALS, who uses the company's implantable BCI to navigate the iPad home screen, open apps, and compose text, all without using his hands, voice, or eyes. This moment follows Apple's announcement in May of a new BCI Human Interface Device (BCI HID) input protocol. With the new protocol, Apple's operating systems can leverage brain signals as a native input method for the first time.

'This is the first time the world has seen native, thought-driven control of an Apple device in action,' said Dr. Tom Oxley, CEO and Founder, Synchron. 'Mark's experience is a technical breakthrough, and a glimpse into the future of human-computer interaction, where cognitive input becomes a mainstream mode of control.'

Mark's use of the iPad is enabled by Apple's built-in accessibility feature, Switch Control, and Synchron's Stentrode™ device, which detects motor intention from blood vessels within the brain. These signals are wirelessly transmitted to an external decoder, which interfaces directly with iPadOS through the new HID protocol. The system allows for closed-loop communication, where an iPad, iPhone or Apple Vision Pro shares contextual screen data with the BCI decoder to optimize real-time performance, enabling precise, intuitive control using just neural signals.

'When I lost the use of my hands, I thought I had lost my independence,' said Mark. 'Now, with my iPad, I can message my loved ones, read the news, and stay connected with the world, just by thinking. It's given me part of my life back.'
Synchron was the first company to bring a permanently implantable BCI into clinical trials, and its endovascular approach avoids open brain surgery, making it uniquely positioned for real-world scalability. Today's demonstration marks a major advancement in assistive technology and a glimpse at the future of human-computer interaction. Synchron is continuing controlled rollouts of the BCI HID experience with clinical participants, with broader availability to come. This marks a critical step in making BCI technology practical, scalable, and integrated into the global consumer ecosystem, moving beyond clinical trials into everyday life. For more information and to view the video, visit our YouTube channel.

About Synchron

Synchron is the category-defining brain-computer interface (BCI) company pioneering implantable neurotechnology designed to restore autonomy and improve lives. Its mission is to bring the first commercially scalable BCI to millions of people with motor impairment. Synchron has completed two human clinical trials since 2019 and is preparing for a larger-scale study. The company's implantable BCI is now powered by Chiral AI™, a proprietary foundation model of cognition. With the BCI market projected to reach $400 billion (Morgan Stanley), Synchron is leading the field while prioritizing ethical development grounded in Cognitive Liberty and the protection of fundamental rights. Synchron is headquartered in New York. Learn more at and follow @synchroninc.

Despite Billions In Investment, AI Reasoning Models Are Falling Short

Forbes

In early June, Apple released an explosive paper, The Illusion of Thinking: Understanding the Limitations of Reasoning Models via the Lens of Problem Complexity. It examines the reasoning ability of Large Reasoning Models (LRMs) such as Claude 3.7 Sonnet Thinking, Gemini Thinking, DeepSeek-R1, and OpenAI's o-series models — how they think, especially as problem complexity increases. The research community dug in, and the responses were swift. Despite the increasing adoption of generative AI and the presumption that AI will replace tasks and jobs at scale, these Large Reasoning Models are falling short.

By definition, Large Reasoning Models (LRMs) are Large Language Models (LLMs) focused on step-by-step thinking. This is called Chain of Thought (CoT), which facilitates problem solving by guiding the model to articulate its reasoning steps. Jing Hu, a writer and researcher and the author of 2nd Order Thinkers, who dissected the paper's findings, remarked that 'AI is just sophisticated pattern matching, no thinking, no reasoning' and 'AI can only do tasks accurately up to a certain degree of complexity.'

As part of the study, researchers created a closed puzzle environment for games like Checkers Jumping, River Crossing, and Tower of Hanoi, which simulate varied conditions of complexity. Puzzles were applied across three stages of complexity, ranging from the simplest to high complexity. Across all three stages of the models' performances, the paper concluded:

At 'Low Complexity', the regular models performed better than LRMs. Hu explained, 'The reasoning models were overthinking — wrote thousands of words, exploring paths that weren't needed, second-guessing correct answers and making things more complicated than they should have been.' In the Tower of Hanoi, a human can solve the puzzle within seven moves while Claude-3.7 Sonnet Thinking uses '10x more tokens compared to the regular version while achieving the same accuracy...
it's like driving a rocket ship to the corner store.'

At 'Medium Complexity', LRMs outperformed LLMs, revealing traces of chain-of-thought reasoning. Hu notes LRMs tended to explore wrong answers first before eventually finding the correct answer; however, she argues, 'these thinking models use 10-50x more compute power (15,000-20,000 tokens vs. 1,000-5,000). Imagine paying $500 instead of $50 for a hamburger that tastes 10% better.' Hu says that this isn't an impressive breakthrough but reveals complexity that is dressed to impress audiences and 'simple enough to avoid total failure.'

At 'High Complexity', both LRMs and standard models collapse, and accuracy drops to zero. As the problems get more complex, the models simply stop trying. Hu explains, referencing Figure 6 from Apple's paper, 'Accuracy starts high for all models on simple tasks, dips slowly then crashes to near zero at a 'critical point' of complexity. If this is compared to the row displaying token use, the latter rises as problems become harder ('models think more'), peaks, then drops sharply at the same critical point even if token budget is still available.' Hu explains models aren't scaling up their effort; rather, they abandon real reasoning and output less.

Gary Marcus is an authority on AI. He's a scientist and has written several books, including The Algebraic Mind and Rebooting AI. He continues to scrutinize the releases from these AI companies. In his response to Apple's paper, he states, 'it echoes and amplifies the training distribution argument that I have been making since 1998: neural networks of various kinds can generalize within a training distribution of data they are exposed to, but their generalizations tend to break down outside that distribution.' This means the more edge cases introduced to these LRMs, the more they will go off-track, especially with problems that are very different from the training data.
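For context on the Tower of Hanoi benchmark mentioned above: the puzzle has a classic recursive solution, and the optimal answer for n disks always takes exactly 2**n - 1 moves, which is why difficulty ramps up so steeply with each added disk. A minimal sketch (peg names are arbitrary):

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Return the optimal move list for n disks (always 2**n - 1 moves)."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, dst, aux, moves)   # move n-1 disks out of the way
    moves.append((src, dst))             # move the largest disk to its target
    hanoi(n - 1, aux, src, dst, moves)   # stack the n-1 disks back on top
    return moves

# 3 disks: the 7-move solution a human can find by hand
print(len(hanoi(3)))   # 7
# 10 disks: already 1023 moves -- the "critical point" arrives fast
print(len(hanoi(10)))  # 1023
```

The exponential move count, not the algorithm itself, is what makes deeper instances a stress test for a model's step-by-step output.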
He also advises that LRMs have a scaling problem because 'the outputs would require too many output tokens', indicating the correct answer would be too long for the LRMs to produce. The implications? Hu advises, 'This comparison matters because it debunks hype around LRMs by showing they only shine on medium complexity tasks, not simple or extreme ones.'

Why this Hedge Fund CEO Passes on GenAI

Ryan Pannell is the CEO of Kaiju Worldwide, a technology research and investment firm specializing in predictive artificial intelligence and algorithmic trading. He operates in an industry that demands compliance and a stronger level of certainty. He uses predictive AI, a type of artificial intelligence that leverages statistical analysis and machine learning to forecast outcomes based on patterns in historical data; unlike generative AI such as LLM and LRM chatbots, it does not create original content.

Sound data is paramount, and the hedge fund leverages only closed datasets. As Pannell explains, 'In our work with price, time, and quantity, the analysis isn't influenced by external factors — the integrity of the data is reliable, as long as proper precautions are taken, such as purchasing quality data sets and putting them through rigorous quality control processes, ensuring only fully sanitized data are used.' The data they purchase — price, time, and quantity — come from three different vendors, and when they compare the outputs, 99.999% of the time all three match. However, when there's an error — since some data vendors occasionally provide incorrect price, time, or quantity information — the other two usually point out the mistake. Pannell argues, 'This is why we use data from three sources. Predictive systems don't hallucinate because they aren't guessing.' For Kaiju, the predictive model uses only what it knows and whatever new data they collect to spot patterns they use to predict what will come next.
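The three-vendor cross-check Pannell describes amounts to a majority vote per data point: if one feed disagrees, the other two identify the outlier. A minimal sketch, with hypothetical vendor names and a simplified (price, time, quantity) tick format (neither is from Kaiju's actual systems):

```python
from collections import Counter

def reconcile(ticks):
    """Majority-vote a (price, time, quantity) tick across vendor feeds.

    Returns (value, outlier_vendors). If no two feeds agree, there is
    no majority and the tick is flagged for manual review.
    """
    counts = Counter(ticks.values())
    value, votes = counts.most_common(1)[0]
    if votes < 2:
        return None, sorted(ticks)        # no majority: quarantine the tick
    outliers = [v for v, t in ticks.items() if t != value]
    return value, outliers

# Hypothetical feeds: vendor_b reports a bad price for the same tick
feeds = {
    "vendor_a": (101.25, "09:30:00.001", 500),
    "vendor_b": (110.25, "09:30:00.001", 500),  # erroneous price
    "vendor_c": (101.25, "09:30:00.001", 500),
}
value, outliers = reconcile(feeds)
print(value)     # (101.25, '09:30:00.001', 500)
print(outliers)  # ['vendor_b']
```

The design choice mirrors the quote: with three independent sources, a single bad feed can always be outvoted, while a three-way disagreement is rare enough to escalate rather than guess.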
'In our case, we use it to classify market regimes — bull, bear, neutral, or unknown. We've fed them trillions of transactions and over four terabytes of historical price and quantity data. So, when one of them outputs 'I don't know,' it means it's encountered something genuinely unprecedented.' He claims that if it sees loose patterns and predicts a bear market with 75% certainty, it's likely correct; 'I don't know,' however, signals a unique scenario, something never seen in decades of market data. 'That's rare, but when it happens, it's the most fascinating for us,' says Pannell. In 2017, when Trump policy changes caused major trade disruptions, Pannell asserted these systems were not yet in place, so the gains they made within this period of high uncertainty were mostly luck. But the system today, which has experienced this level of volatility before, can perform well, and with consistency.

AI Detection and the Anomaly of COVID-19

Just before the dramatic stock market drop of February 2020, the market was still at an all-time high. However, Pannell noted that the system was signaling that something was very wrong, and the strange behavior in the market kept intensifying: 'The system estimated a 96% chance of a major drop and none of us knew exactly why at the time. That's the challenge with explainability — AI can't tell you about news events, like a cruise ship full of sick people or how COVID spread across the world. It simply analyzes price, time and quantity patterns and predicts a fall based on changing behavior it is seeing, even though it has no awareness of the underlying reasons. We, on the other hand, were following the news as humans do.' The news pointed to this 'COVID-19' thing; at the time it seemed isolated.
Pannell's team weren't sure what to expect, but in hindsight he realized the value of the system: it analyzes terabytes of data and billions of examinations daily for any recognizable pattern, and sometimes determines that what's happening matches nothing it has seen before. In those cases, he realized, the system acted as an early warning, allowing them to increase their hedges. Even with the billions of dollars generated by these predictive AI systems, their efficacy drops off after a week to roughly 17%-21%, and making trades outside this range is extremely risky. Pannell suggests he hasn't seen any evidence that AI — of any kind — will be able to predict financial markets with accuracy 90 days, six months or a year in advance. 'There are simply too many unpredictable factors involved. Predictive AI is highly accurate in the immediate future — between today and tomorrow — because the scope of possible changes is limited.'

Pannell remains skeptical of the promises of LLMs and the current LRMs for his business. He describes wasting three hours being lied to by ChatGPT-4o when he was experimenting with using it to architect a new framework. At first he was blown away that the system had substantially increased its functionality, but after three hours he determined it had been lying to him the entire time. He explains, 'When I asked, 'Do you have the capability to do what you just said?' the system responded it did not and added that its latest update had programmed it to keep him engaged over giving an honest answer.' Pannell adds, 'Within a session, an LLM can adjust when I give it feedback, like 'don't do this again,' but as soon as the session goes for too long, it forgets and starts lying again.' He also points to ChatGPT's memory constraints: he noted it performs really well for the first hour, but in the second or third hour, ChatGPT starts forgetting earlier context, making mistakes and dispensing false information.
He described it to a colleague this way: 'It's like working with an extremely talented but completely drunk programmer. It does some impressive work, but it also over-estimates its capabilities, lies about what it can and can't do, delivers some well-written code, wrecks a bunch of stuff, apologizes and says it won't do it again, tells me that my ideas are brilliant and that I am 'right for holding it accountable', and then repeats the whole process over and over again. The experience can be chaotic.'

Could Symbolic AI be the Answer?

Catriona Kennedy holds a Ph.D. in Computer Science from the University of Birmingham and is an independent researcher focusing on cognitive systems and ethical automation. Kennedy explains that automated reasoning has always been a branch of AI, with the inference engine at its core, which applies the rules of logic to a set of statements encoded in a formal language. She explains, 'An inference engine is like a calculator, but unlike AI, it operates on symbols and statements instead of numbers. It is designed to be correct.' It is designed to deduce new information, simulating the decision-making of a human expert. Generative AI, in comparison, is a statistical generator, and therefore prone to hallucinations, because such models 'do not interpret the logic of the text in the prompt.'

This is the heart of symbolic AI, which uses an inference engine and allows for human experience and authorship. It is a distinct AI system from generative AI. The difference with symbolic AI is the knowledge structure. She explains, 'You have your data and connect it with knowledge, allowing you to classify the data based on what you know. Metadata is an example of knowledge. It describes what data exists and what it means, and this acts as a knowledge base linking data to its context — such as how it was obtained and what it represents.' Kennedy also adds that ontologies are becoming popular again.
An ontology defines all the things that exist and their interdependent properties and relationships. As an example, animal is a class, bird is a subclass, and eagle or robin is a further subclass. The properties of a bird: has two feet, has feathers, and flies. However, what an eagle eats may differ from what a robin eats. Ontologies and metadata can connect with logic-based rules to ensure correct reasoning based on defined relationships.

The main limitation of pure symbolic AI is that it doesn't easily scale. Kennedy points out that these knowledge structures can become unwieldy. While it excels at special-purpose tasks, it becomes brittle at very complex levels and difficult to manage when dealing with large, noisy or unpredictable data sets.

What we have today in current LRMs has not yet satisfied these researchers that AI models are any closer to thinking like humans. As Marcus points out, 'our argument is not that humans don't have any limits, but LRMs do, and that's why they aren't intelligent... based on what we observe from their thoughts, their process is not logical and intelligent.' Jing Hu concludes, 'Too much money depends on the illusion of progress — there is a huge financial incentive to keep the hype going even if the underlying technology isn't living up to the promises. Stop the blind worship of GenAI.' (Note: OpenAI recently raised $40 billion at a post-money valuation of $300 billion.)

For hedge fund CEO Ryan Pannell, combining generative AI (which can handle communication and language) with predictive systems (which can accurately process data in closed environments) would be ideal. As he explains, 'The challenge is that predictive AI usually doesn't have a user-friendly interface; it communicates in code and math, not plain English. Most people can't access or use these tools directly.'
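The bird ontology Kennedy sketches (classes, subclasses, and inherited properties with subclass-specific values) can be illustrated as a tiny lookup-by-inheritance system; the class hierarchy and property values below are illustrative only, not drawn from any real ontology standard:

```python
# Class hierarchy: each class names its parent and its own properties.
ONTOLOGY = {
    "animal": {"parent": None,     "props": {}},
    "bird":   {"parent": "animal", "props": {"feet": 2, "feathers": True, "flies": True}},
    "eagle":  {"parent": "bird",   "props": {"diet": "small mammals"}},
    "robin":  {"parent": "bird",   "props": {"diet": "worms and insects"}},
}

def lookup(cls, prop):
    """Inference by inheritance: walk up the hierarchy until prop is found."""
    while cls is not None:
        node = ONTOLOGY[cls]
        if prop in node["props"]:
            return node["props"][prop]
        cls = node["parent"]
    return None  # property not defined anywhere in the chain

print(lookup("eagle", "feathers"))  # True (inherited from bird)
print(lookup("eagle", "diet"))      # small mammals (eagle-specific)
print(lookup("robin", "diet"))      # worms and insects
```

This is the appeal and the limitation in miniature: every answer is traceable to an explicit rule, but the structure must be hand-authored, which is why such systems become unwieldy at scale.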
He opts for integrating GPT as an intermediary, 'where you ask GPT for information, and it relays that request to a predictive system and then shares the results in natural language — it becomes much more useful. In this role, GPT acts as an effective interlocutor between the user and the predictive model.'

Gary Marcus believes that by combining symbolic AI with neural networks — an approach termed neurosymbolic AI — connecting data to knowledge in ways that leverage human thought processes, the result will be better. He explains that this will provide a robust AI capable of 'reasoning, learning and cognitive modelling.' Marcus laments that for four decades, the elites who have evolved machine learning, 'closed-minded egotists with too much money and power', have 'tried to keep a good idea, namely neurosymbolic AI, down — only to accidentally vindicate the idea in the end.'

'Huge vindication for what I have been saying all along: we need AI that integrates both neural networks and symbolic algorithms and representations (such as logic, code, knowledge graphs, etc.). But also, we need to do so reliably, and in a general way, and we haven't yet crossed that threshold.'

NASA skywatching tips for August include a Jupiter and Venus meetup

Digital Trends

There's plenty of planetary action to be enjoyed in August, according to NASA's latest rundown of what to look out for in the sky this month. Highlights include a morning meetup between Jupiter and Venus, a chance to see the Perseid meteor shower, and a glimpse into the destiny of our own sun.

Mars

Mars is also viewable this month; in fact, it's the only planet visible in the early evening sky in August. You'll be able to spot it low in the west for about an hour after daylight starts to fade, though you'll have to look hard, as the glow of its characteristic salmon-pink color is now only about 60% as bright as it was in May.

Saturn

Later in the evening, at around 10 p.m., you'll be able to spot Saturn, and as the month goes on it'll appear a little earlier each evening. Look out for Saturn in the east after dark with the constellations Cassiopeia and Andromeda. The planet will appear to move toward the western part of the sky by dawn, so if you're an early riser, it's the perfect time to take a look.

Jupiter and Venus

'The real highlight of August is the close approach of Jupiter and Venus,' NASA says. 'They shine brightly in the east before sunrise throughout the month.' The two planets appear far apart at the start of August, but as the days pass, they'll move closer together. 'They appear at their closest on the 11th and 12th — only about a degree apart. Their rendezvous happens against a backdrop of bright stars including Orion, Taurus, Gemini, and Sirius. A slim crescent moon joins the pair of planets after they separate again, on the mornings of the 19th and 20th,' NASA says.

Perseid meteor shower

The Perseid meteor shower is back again, and peaks overnight on August 12 and 13. But this year the moon is nearly full on the peak night, so its brightness will impact the ability to see all but the brightest meteors.
NASA says that while this is obviously disappointing, another meteor shower, the Geminids, will offer an excellent viewing opportunity — free of any moonlight — in December.

Dumbbell Nebula

This month is also a wonderful chance to view the Dumbbell Nebula, also known as M27, which is a type known as a 'planetary nebula.' 'A nebula is a giant cloud of gas and dust in space, and planetary nebulas are produced by stars like our sun when they become old and nuclear fusion ceases inside them,' NASA explains. 'They blow off their outer layers, leaving behind a small, hot remnant called a white dwarf. The white dwarf produces lots of bright ultraviolet light that illuminates the nebula from the inside, as the expanding shell of gas absorbs the UV light and re-radiates it as visible light.' Nicknamed for its dumbbell-like shape, the Dumbbell Nebula shows in the night sky as a small, faint patch of light about a quarter of the width of the full moon when viewed through binoculars or a small telescope. Consult a sky chart or your favorite astronomy app to find its precise location. 'Here's hoping you get a chance to observe this glimpse into the future that awaits our sun about 5 billion years from now,' NASA says. 'It's part of a cycle that seeds the galaxy with the ingredients for new generations of stars and planets — perhaps even some not too different from our own.'
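As an aside on the Mars note above: astronomers express brightness changes on a logarithmic magnitude scale, where a flux ratio f corresponds to a magnitude change of -2.5 * log10(f). So "about 60% as bright" means Mars has faded by roughly half a magnitude since May. A quick check:

```python
import math

def magnitude_change(flux_ratio):
    """Convert a brightness (flux) ratio into a change in apparent magnitude.

    The scale is inverted and logarithmic: dimmer objects get larger numbers.
    """
    return -2.5 * math.log10(flux_ratio)

# Mars at 60% of its May brightness has faded by about 0.55 magnitudes
print(round(magnitude_change(0.60), 2))  # 0.55
```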
