
Hardware, Software, Meet Wetware: A Computer With 800,000 Human Neurons
The world's first 'code-deployable' biological computer is now for sale. The Cortical Labs CL1 costs $35,000 and contains 800,000 human brain cells living and growing in a nutrient solution on a silicon chip. Computer scientists can deploy code directly to these neurons, which are integrated into what the company calls a 'biOS,' or Biological Intelligence Operating System: a mixture of hard silicon and soft tissue.
The goal, according to the company's founder? Smarter AI that drops some of the A and adds more of the I. Maybe, eventually, smarter brains than the ones we currently walk around with.
'The only machine or the only thing that we know of that actually has true intelligence is the brain,' founder and CEO Hon Weng Chong told me when I interviewed him five years ago, while he was still using mouse neurons. 'So we said, let's start with the basic building structure, the building blocks being neurons, and let's build our way up and maybe we'll get there along the way.'
Our human brains have neurons connected in hierarchies, and from that intelligence and consciousness emerge, he adds. This approach is similar to neuromorphic computing architectures, which attempt to mimic biological brains with silicon-based hardware, but differs, of course, in that neuromorphic chips do not typically use actual living brain cells.
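To make that comparison concrete, here is a minimal sketch of the kind of simplified unit a neuromorphic chip implements in silicon: a leaky integrate-and-fire neuron that accumulates input, leaks charge over time, and emits a spike when it crosses a threshold. The parameters and code are purely illustrative and not drawn from any particular neuromorphic system.

```python
# A minimal leaky integrate-and-fire (LIF) neuron, the kind of simplified
# model neuromorphic chips implement in silicon. All parameters here are
# illustrative, not taken from any particular chip.
def lif_neuron(input_currents, threshold=1.0, leak=0.9, reset=0.0):
    """Return a list of 0/1 spikes for a sequence of input currents."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current  # integrate input, leak charge
        if potential >= threshold:              # fire when threshold is crossed
            spikes.append(1)
            potential = reset                   # reset after a spike
        else:
            spikes.append(0)
    return spikes


print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.8, 0.0, 0.9]))
```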
Cortical Labs, based in Australia, says scientists can solve today's most difficult problems with their biological computers, which they say are self-programming and infinitely flexible.
A key difference between biological computers and silicon-based chips, of course, is that biological computers have a much shorter working life. The neurons that ship with your CL1 will live for 'up to six months,' at which point you'll likely have to invest in a refresh or refurbishment that provides new neurons for continued compute. And yes, biological computers need food, water and nutrients, all of which are supplied onboard via a life-support system that keeps them at optimum temperature. It also filters out the waste byproducts of living human cells: the kind of work kidneys might do in a full living organism.
A Cortical Labs chip under a high-powered electron microscope. You can see tight connections between neurons and the silicon substrate, the company says.
In some ways the CL1 is more like a spaceship than a computer, because it's a self-contained life-support system that requires few external inputs. A key difference: the need for external power.
From the outside, though, you treat the CL1 as a typical computer.
You can plug in USB devices, cameras, even actuators if you want your CL1 to control a physical system. (Which, frankly, human neurons are typically pretty good at.) And there's a touchscreen so you can see system status or view live data.
Five years ago, Cortical Labs' then-CTO Andy Kitchen told me they were deploying systems with tens of thousands to hundreds of thousands of neurons, but that their roadmap included 'scaling that up to millions of neurons.' Now Cortical Labs sees its biological computers growing to hundreds of millions of cells and, with different technologies, to billion- or trillion-cell levels.
However, there's no direct one-to-one equivalence between biological neurons and the neuromorphic neurons in a silicon-based system, he added. Biological neurons are much more powerful, he said.
Interestingly, communicating with the physical human neurons in a biological computer is vastly different from writing code for a conventional silicon computer.
'The premier way would be to describe your task somehow, probably through some sort of very high-level language, and then we would turn that into a stimulus sequence which would shape biological behavior to fit your specification,' Kitchen told me.
Part of the difference is how you encode and communicate the problem, and part is that the CL1 neurons, like the ones in your brain right now, have some plasticity: they can essentially reprogram themselves for different tasks. The neurons learn how to solve your problem, just as you learn how to do new things.
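To illustrate that loop in concrete terms, here is a minimal sketch of what closed-loop 'stimulus shaping' might look like, assuming a hypothetical Python-style interface. The names (BioProcessor, read_spikes, stimulate) are invented for illustration and are not Cortical Labs' actual biOS API; the point is the pattern: read activity, score it against the task, and answer with structured or noisy stimulation so the culture's own plasticity does the reprogramming.

```python
# Hypothetical illustration only, not the real Cortical Labs biOS API.
# Pattern: read activity, score it against the task, then feed back
# structured ("reward") or noisy ("punish") stimulation so the neurons'
# own plasticity shapes their behavior toward the specification.
import random
import time


class BioProcessor:
    """Stand-in for a culture of neurons on a multi-electrode array."""

    def read_spikes(self):
        # A real system would return per-electrode spike counts.
        return [random.randint(0, 5) for _ in range(64)]

    def stimulate(self, pattern):
        # A real system would drive electrodes with a voltage/timing
        # pattern; here it is a no-op placeholder.
        pass


def score(spikes, target):
    """Task-specific measure of how close the activity is to the goal."""
    return -abs(sum(spikes) - target)


def training_loop(chip, target=120, epochs=100):
    for _ in range(epochs):
        spikes = chip.read_spikes()
        if score(spikes, target) > -10:
            # Predictable, structured stimulus acts as positive feedback.
            chip.stimulate(pattern="structured")
        else:
            # Unpredictable noise acts as negative feedback.
            chip.stimulate(pattern="noise")
        time.sleep(0.01)  # pacing; real loops run at millisecond scale


if __name__ == "__main__":
    training_loop(BioProcessor())
```

The reward-by-predictability idea in this toy loop echoes the company's earlier published 'DishBrain' Pong experiments, in which predictable stimulation reportedly served as positive feedback and unpredictable noise as negative feedback.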
You won't likely see CL1 systems in general use anytime soon: currently, the targeted customers are in medical fields like drug discovery and disease modeling, says IEEE Spectrum. There's the added value that scientists can perform experiments on a little synthetic brain as well.
If all of this seems on the edge of creepy, or even right over it, that's likely because it is. Cortical Labs says it doesn't do any animal testing, although it did start with mouse neurons, and it says the human brain cells in its biological computers are lab-grown. But clearly the first human neurons came from somewhere.
Cortical Labs says customers have to get 'ethical approval' to generate cell lines, and it requires buyers to have proper facilities to maintain the biological chips. What exactly that means, however, is unclear.
Soon we may see physical systems in the world, like humanoid robots, with partially organic components in their brains.