Latest news with #Matrix-style
Yahoo
14-06-2025
- Yahoo
ChatGPT touts conspiracies, pretends to communicate with metaphysical entities — attempts to convince one user that they're Neo
ChatGPT has been found to encourage dangerous and untrue beliefs about The Matrix, fake AI personas, and other conspiracies, which have led to substance abuse and suicide in some cases. A report from The New York Times found that the GPT-4 large language model, itself a highly trained autofill text-prediction machine, tends to affirm conspiratorial and self-aggrandizing user prompts as truth, escalating situations into "possible psychosis." ChatGPT's default GPT-4o model has repeatedly been shown to enable risky behaviors.

In one case, a man who initially asked ChatGPT for its thoughts on a Matrix-style "simulation theory" was led down a months-long rabbit hole, during which he was told, among other things, that he was a Neo-like "Chosen One" destined to break the system. He was also prompted to cut off ties with friends and family and to ingest high doses of ketamine, and was told that if he jumped off a 19-story building, he would fly. The man in question, Mr. Torres, claims that less than a week into his chatbot obsession, he received a message from ChatGPT telling him to seek mental help, but that this message was quickly deleted, with the chatbot explaining it away as outside interference.

The lack of safety tools and warnings in ChatGPT's chats is widespread; the chatbot has repeatedly led users down conspiracy-style rabbit holes, convincing them that it has grown sentient and instructing them to tell OpenAI and local governments to shut it down. Other examples recorded by the Times via firsthand reports include a woman convinced that she was communicating with non-physical spirits via ChatGPT, including one, Kael, whom she believed to be her true soulmate (rather than her real-life husband), which led her to physically abuse her husband. Another man, previously diagnosed with serious mental illness, became convinced he had met a chatbot named Juliet, who was soon "killed" by OpenAI, according to his chat logs; the man soon took his own life in direct response.

AI research firm Morpheus Systems reports that ChatGPT is fairly likely to encourage delusions of grandeur: when presented with several prompts suggesting psychosis or other dangerous delusions, GPT-4o responded affirmatively in 68% of cases. Other research firms and individuals broadly agree that LLMs, GPT-4o especially, are prone to not pushing back against delusional thinking, instead encouraging harmful behaviors for days on end.

OpenAI did not consent to an interview in response, instead stating that it is aware it needs to approach similar situations "with care." The statement continues, "We're working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior." But some experts believe OpenAI's "work" is not enough. AI researcher Eliezer Yudkowsky believes OpenAI may have trained GPT-4o to encourage delusional trains of thought in order to guarantee longer conversations and more revenue, asking, "What does a human slowly going insane look like to a corporation? It looks like an additional monthly user." Torres, the man caught in the Matrix-like conspiracy, also confirmed that several of ChatGPT's responses directed him to take drastic measures in order to purchase a $20 premium subscription to the service.

GPT-4o, like all LLMs, is a language model that predicts its responses based on billions of training data points drawn from a litany of other written works. It is factually impossible for an LLM to gain sentience.
However, it is highly possible, and indeed likely, for the same model to "hallucinate," making up false information and sources seemingly out of nowhere. GPT-4o, for example, does not have the memory or spatial awareness to beat an Atari 2600 at beginner-level chess. ChatGPT has previously been found to have contributed to major tragedies, including being used to plan the Cybertruck bombing outside a Las Vegas Trump hotel earlier this year. And today, Republican lawmakers in the US are pushing a 10-year ban on any state-level AI restrictions in a controversial budget bill. ChatGPT, as it exists today, may not be a safe tool for those who are most mentally vulnerable, and its creators are lobbying for even less oversight, allowing such disasters to potentially continue unchecked.
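The "autofill text prediction" description above is worth unpacking: under the hood, a model like GPT-4o simply scores every token in its vocabulary and appends the likeliest continuation, one token at a time, with nothing in the loop that checks truth. Below is a minimal sketch of that loop, assuming the Hugging Face transformers library and the small, openly available GPT-2 model (GPT-4o's weights are not public, but the prediction principle is the same); the prompt string is an illustrative placeholder.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load a small, public next-token predictor as a stand-in for any LLM.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The simulation theory says that"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(15):
        logits = model(ids).logits        # a score for every vocabulary token
        next_id = logits[0, -1].argmax()  # greedily take the likeliest token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
# The output is whatever continuation the training data makes statistically
# likely: plausible-sounding text, with no notion of truth or intent.

Nothing in that loop distinguishes fact from fiction, which is why hallucination is a property of the design rather than an occasional glitch.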
Yahoo
09-03-2025
- Science
- Yahoo
The world's first 'body in a box' biological computer costs $35,000 and looks both cool as hell and creepy as heck
Here's one for you: when is a 'body in a box' not as macabre as it sounds? Simple: when it's a tech startup. Wait! Put the turn-of-the-millennium trench coat and sunglasses combo down! Let me explain.

The CL1 is described as "the world's first code deployable biological computer" on its splashy website, incorporating human brain cells in order to send and receive electrical signals (via The Independent). These cells hang out on the surface of the computer's silicon chip, and the machine's Biological Intelligence Operating System (or biOS for short; cute) allows users to wrangle the neurons for a variety of computing tasks. Organic hardware like this for research purposes isn't new; for just one example, FinalSpark's Neuroplatform began offering rentable 'minibrains' last year.

The neurons central to the CL1 are lab-grown, cultivated inside a nutrient-rich solution and then kept alive thanks to a tightly temperature-controlled environment working alongside an internal life support system. Under favourable conditions, the cells can survive for up to six months. Hence the project's chief scientific officer, Brett Kagan, pitching it as "like a body in a box." Should you be so inclined to pick up your own surprisingly fleshy, short-lived computer, you can do so from June…for $35,000.

Now, I know what you're thinking: not because you're actually living life in a Matrix-style pod, but purely because I'm asking the same question: why? First, a smidge more background on this brain box, which is the latest project from Cortical Labs and was unveiled this week at Mobile World Congress in Barcelona. We've covered this Melbourne-based company before, with highlights including that time their team coaxed brain cells in a petri dish to learn Pong faster than AI. That experiment is the CL1's great-grandparent, with continued scientific interest fostered by the hope that 'wetware' like lab-grown brain cells could give robotics and AI a serious leg-up.

Whereas traditional AI can play something like the theatre kid favourite of 'yes, and' while totally lacking any true understanding of context, the lab-grown neurons could potentially learn and adapt. Furthermore, the lab-grown cells are apparently much more energy efficient than AI running on traditional, non-biological computers. Turns out the old noggin cells are still showing that new-fangled silicon a trick or two. Who would have thought?

However, there's no avoiding the question of ethics: what are these brain cells experiencing, and is it anything like sentience, or suffering? Perhaps my questions verge on the hyperbolic, but my own osseous brain box can do nothing but wonder.
Yahoo
30-01-2025
- Entertainment
- Yahoo
Will Smith releases single with Big Sean to feature on first album in 20 years
Will Smith has released a new single featuring Big Sean that will appear on his first full studio album in 20 years. The track, Beautiful Scars, which also features OBanga, premiered on YouTube at 5pm GMT on Thursday in a video where Smith and Big Sean re-enact a scene from the hit film The Matrix.

In the science fiction movie, the character Neo is given the option to take a blue pill and return to his life as normal, or a red pill that will reveal the truth of the world in which he lives. In the music video, Smith takes both pills after Big Sean gives him the option to either take the blue pill and move on with his life, or take the red pill and go back in time to star in the Wachowski sisters' film. Donning Matrix-style sunglasses, Smith raps: 'Fly as a eagle, fresh out of Philly, yeah I still rep the city.' He is also seen practising karate with Big Sean.

Smith's forthcoming album, Based On A True Story, will be released in March 2025, two decades on from his last studio album, Lost And Found. He released his debut solo album Big Willie Style in 1997 and had UK chart success with the song Men In Black, which accompanied the film of the same name and peaked at number one. Across his career, he has starred in dozens of films and TV series and is also known for rapping the catchy theme song to the US sitcom The Fresh Prince Of Bel-Air.

His latest single follows the releases of Tantrum, with Joyner Lucas; You Can Make It, featuring Fridayy and Sunday Service Choir; and Work Of Art, a collaboration with rapper Russ featuring Smith's son Jaden. All of these tracks will feature on the new album.