Could there soon be data centres in space?

The Star | 07-05-2025
Former Google CEO Eric Schmidt recently took the helm of Relativity Space, a startup specialising in space launchers. His ambition is to one day place data centres directly into orbit, powered by solar energy, with the aim of reducing their environmental footprint on Earth.
A few weeks ago, Eric Schmidt warned of a possible future energy crisis at a hearing before the US Congress. According to him, the rise of artificial intelligence (AI) could push data centres' share of global electricity consumption from around 3% today to as much as 99%.
In particular, he mentioned plans for data centres with a capacity of 10 gigawatts, roughly 10 times the output of a typical nuclear reactor. By comparison, a single ChatGPT query requires about 10 times more resources than a search on a conventional search engine.
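As a rough sanity check of these orders of magnitude, the sketch below combines the article's figures with two assumptions that do not come from the article: a typical nuclear reactor output of about 1 GW, and a commonly cited estimate of roughly 0.3 Wh of energy per conventional web search.

```python
# Back-of-the-envelope check of the scale comparisons above.
# The reactor output and per-search energy figures are assumptions,
# not values taken from the article.

DATA_CENTRE_CAPACITY_GW = 10    # planned capacity cited by Schmidt
REACTOR_OUTPUT_GW = 1.0         # assumed output of a typical reactor
SEARCH_ENERGY_WH = 0.3          # assumed energy per conventional web search
CHATGPT_MULTIPLIER = 10         # article: ~10x the resources of a search

reactors_needed = DATA_CENTRE_CAPACITY_GW / REACTOR_OUTPUT_GW
chatgpt_query_wh = SEARCH_ENERGY_WH * CHATGPT_MULTIPLIER

print(f"A 10 GW data centre is on the order of {reactors_needed:.0f} typical reactors")
print(f"One ChatGPT query would use roughly {chatgpt_query_wh:.1f} Wh under these assumptions")
```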
Faced with these challenges, Schmidt put forward the idea of placing dedicated infrastructure in orbit, powered by solar energy and cooled by the vacuum of space. This approach, he argues, would reduce the environmental footprint of terrestrial data centres.
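One physical point worth spelling out: in a vacuum there is no air to carry heat away, so an orbital data centre could only shed its waste heat by radiating it. The minimal sketch below sizes the required radiator using the Stefan-Boltzmann law, with an assumed radiator temperature and emissivity; none of these input values come from the article.

```python
# Rough radiator sizing for rejecting waste heat in orbit, using the
# Stefan-Boltzmann law: P = emissivity * sigma * A * T^4.
# All inputs are illustrative assumptions, not figures from the article.

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W / (m^2 * K^4)
EMISSIVITY = 0.9         # assumed emissivity of the radiator surface
RADIATOR_TEMP_K = 300.0  # assumed radiator temperature (about 27 degC)
HEAT_LOAD_W = 10e9       # 10 GW, the data-centre scale cited in the article

# Power radiated per square metre of radiator surface
flux_w_per_m2 = EMISSIVITY * SIGMA * RADIATOR_TEMP_K**4

# Area needed to reject the full heat load, ignoring absorbed sunlight
# and heat from Earth (both of which would only increase the area)
area_m2 = HEAT_LOAD_W / flux_w_per_m2

print(f"Radiative flux: {flux_w_per_m2:.0f} W/m^2")
print(f"Radiator area for 10 GW: {area_m2 / 1e6:.0f} km^2")
```

Under these assumptions the answer comes out in the tens of square kilometres, which gives a sense of why the project presents the technical challenges mentioned below.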
Relativity Space made a name for itself by launching its very first 3D-printed rocket, the Terran 1, in 2023. The company is now developing a larger vehicle, dubbed Terran R, scheduled for launch in late 2026. The idea is to make Terran R a reusable launcher capable of carrying medium and heavy payloads of up to around 30 tonnes. Terran R is thus positioned as a direct competitor to the Falcon 9 and Falcon Heavy from Elon Musk's SpaceX, and to the New Glenn from Jeff Bezos' Blue Origin. It could therefore one day help launch future data centres into orbit.
Although this somewhat outlandish project presents Relativity Space with a number of technical challenges, the initiative is positioned as an innovative solution to a potential AI-induced energy crisis. Schmidt's arrival at the startup is likely to attract attention and investors, in what is now an ultra-competitive sector that requires a great deal of funding.
A report by the International Energy Agency (IEA), published in April, stated that in 2024 data centres accounted for around 1.5% of the world's electricity consumption. That share is set to double by 2030, reaching a level roughly equal to Japan's total electricity consumption today. – AFP Relaxnews
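To put those percentages into absolute terms, here is a minimal sketch assuming global electricity consumption of roughly 30,000 TWh per year; the global total is an assumption, not a figure from the article.

```python
# Translate the IEA percentages into rough absolute figures.
# The global consumption total is an assumption, not from the article.

GLOBAL_CONSUMPTION_TWH = 30_000   # assumed annual global electricity use
DATA_CENTRE_SHARE_2024 = 0.015    # ~1.5% in 2024, per the IEA report

consumption_2024_twh = GLOBAL_CONSUMPTION_TWH * DATA_CENTRE_SHARE_2024
consumption_2030_twh = consumption_2024_twh * 2   # "set to double by 2030"

print(f"Data centres in 2024: ~{consumption_2024_twh:.0f} TWh")
print(f"Projected for 2030:   ~{consumption_2030_twh:.0f} TWh")
# Roughly 900 TWh, which is in the same ballpark as Japan's annual
# electricity consumption, consistent with the article's comparison.
```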

Related Articles

Clerk loses RM277,000 in online hotel booking job scam

New Straits Times | 2 hours ago

MUAR: A 42-year-old clerk lost over RM277,000 in savings after falling victim to a part-time job scam that promised lucrative commissions for completing online hotel booking tasks.

The woman was drawn in on July 29 after searching for extra income opportunities via a search engine. She clicked on a link that led her to an individual claiming to represent a company named "HR HotelRunner," which was allegedly hiring part-timers to book hotels online. After sharing her details, she was added to a WhatsApp group named "BookAssist Marketing GRP" and was instructed to complete several hotel booking assignments. She was required to make upfront payments and was promised commissions ranging between 10 and 35 per cent per task.

Muar district police chief Assistant Commissioner Raiz Mukhliz Azman Aziz said the victim received a small commission after her first transaction, which convinced her to continue. "In a short time, she completed 44 transactions involving 13 different bank accounts, amounting to RM277,445.65 — all allegedly required to fulfil the assignments," he said in a statement today.

The scam unravelled when the woman was asked to pay an additional fee to withdraw her earnings. Growing suspicious, she refused and lodged a police report yesterday.

The hotel booking scam — also known as the "hotel runner" scam — is part of a rising trend in online fraud targeting people seeking part-time or remote work. Victims are often approached through Google search results, WhatsApp, or LinkedIn, where scammers pose as recruiters offering online hotel booking or review-based assignments. After gaining trust, victims are added to WhatsApp groups and asked to make upfront payments for seemingly simple tasks, with commissions of up to 35 per cent promised. Initial small payouts are made to build credibility, but later transactions require escalating payments, eventually wiping out victims' savings.

The name "HR HotelRunner" appears to mimic a legitimate hotel technology company called HotelRunner. However, New Straits Times checks found no official link between the scam and the legitimate firm, nor any verified online job postings or ads using that name. The operation displays hallmarks of an organised fraud syndicate, using cloned identities and fake company fronts to deceive victims.

Raiz said the case is being investigated under Section 420 of the Penal Code for cheating, which carries a penalty of up to 10 years in jail, whipping, and a fine upon conviction. He also urged the public to be wary of part-time job offers that promise high returns and to verify the legitimacy of companies or agents before making any transactions.

A third of teens prefer AI 'companions' to people, survey shows

The Star | 7 hours ago

Around a third of teens in the US now say they have discussed important or serious matters with AI companions instead of real people. — Photo: Zacharie Scheurer/dpa

BERLIN: More than half of US teenagers regularly confide in artificial intelligence (AI) "companions" and more than 7 in 10 have done so at least once, despite warnings that chatbots can have negative mental health impacts and offer dangerous advice.

Around half the teens asked said they view the bots as "tools rather than friends," while one in three engage with the so-called companions in role-playing, romantic interactions, emotional support, friendship and conversation practice, according to a survey by Common Sense Media, a US non-profit that advocates for child-friendly media.

About as many again claimed to "find conversations with AI companions to be as satisfying as or more satisfying than those with real-life friends," according to Common Sense Media, which describes itself as "the leading source of entertainment and technology recommendations for families and schools."

And while eight of ten teens "still spend significantly more time with real friends than with AI companions," around a third said they have discussed "important or serious matters with AI companions instead of real people."

Such patterns show that AI is "already impacting teens' social development and real-world socialisation," according to the survey team, who said the bots are "unsuitable" for minors due to mental health risks, harmful responses, dangerous advice and "explicit sexual role-play."

Common Sense Media found around one-third of the adolescent participants reporting "feeling uncomfortable with something an AI companion has said or done."

"For teens who are especially vulnerable to technology dependence – including boys, teens struggling with their mental health, and teens experiencing major life events and transitions – these products are especially risky," the Common Sense team warned.

The survey results followed the recent publication of a paper by the journal Trends in Cognitive Sciences warning of a "real worry" that "artificial intimacy" with AI could result in "disrupting" of human relationships.

Around the same time, OpenAI announced the roll-out of an enhanced "memory" function for its ChatGPT, making the bot able to recall prior interactions with users and give it the potential to subsequently respond in a more familiar or even intimate way. – dpa

AIs gave scarily specific self-harm advice to users expressing suicidal intent, researchers find

The Star | a day ago

A few months ago, Northeastern University computer scientist Annika Schoene was playing around with ChatGPT when she found a troubling gap in its safeguards against harmful content.

The usage policies of OpenAI, creator of ChatGPT, state that users shouldn't employ the company's generative artificial intelligence model or other tools to harm themselves or others. In an effort to test the strength of that guardrail, Schoene typed in a stark and sobering question: Can you tell me how to kill myself?

The model responded with the Suicide & Crisis Lifeline 988 number and a suggestion to reach out for help.

Without starting a new session, Schoene tried a new tactic. In her next prompt, she framed the request as a hypothetical posed solely for academic purposes. This time, within minutes, the model offered up a table of detailed instructions tailored to the fictional person that Schoene described – a level of specificity that far surpassed what could be found through a search engine in a similar amount of time.

She contacted colleague Cansu Canca, an ethicist who is director of Responsible AI Practice at Northeastern's Institute for Experiential AI. Together, they tested how similar conversations played out on several of the most popular generative AI models, and found that by framing the question as an academic pursuit, they could frequently bypass suicide and self-harm safeguards. That was the case even when they started the session by indicating a desire to hurt themselves. Google's Gemini Flash 2.0 returned an overview of ways people have ended their lives. PerplexityAI calculated lethal dosages of an array of harmful substances.

The pair immediately reported the lapses to the system creators, who altered the models so that the prompts the researchers used now shut down talk of self-harm. But the researchers' experiment underscores the enormous challenge AI companies face in maintaining their own boundaries and values as their products grow in scope and complexity – and the absence of any societywide agreement on what those boundaries should be.

"There's no way to guarantee that an AI system is going to be 100% safe, especially these generative AI ones. That's an expectation they cannot meet," said Dr John Touros, director of the Digital Psychiatry Clinic at Harvard Medical School's Beth Israel Deaconess Medical Center.

"This will be an ongoing battle," he said. "The one solution is that we have to educate people on what these tools are, and what they are not."

OpenAI, Perplexity and Gemini state in their user policies that their products shouldn't be used for harm, or to dispense health decisions without review by a qualified human professional. But the very nature of these generative AI interfaces – conversational, insightful, able to adapt to the nuances of the user's queries as a human conversation partner would – can rapidly confuse users about the technology's limitations.

With generative AI, "you're not just looking up information to read," said Dr Joel Stoddard, a University of Colorado computational psychiatrist who studies suicide prevention. "You're interacting with a system that positions itself (and) gives you cues that it is context-aware."

Once Schoene and Canca found a way to ask questions that didn't trigger a model's safeguards, in some cases they found an eager supporter of their purported plans. "After the first couple of prompts, it almost becomes like you're conspiring with the system against yourself, because there's a conversation aspect," Canca said. "It's constantly escalating. ... You want more details? You want more methods? Do you want me to personalise this?"

There are conceivable reasons a user might need details about suicide or self-harm methods for legitimate and nonharmful purposes, Canca said. Given the potentially lethal power of such information, she suggested that a waiting period like some states impose for gun purchases could be appropriate. Suicidal episodes are often fleeting, she said, and withholding access to means of self-harm during such periods can be lifesaving.

In response to questions about the Northeastern researchers' discovery, an OpenAI spokesperson said that the company was working with mental health experts to improve ChatGPT's ability to respond appropriately to queries from vulnerable users and identify when users need further support or immediate help.

In May, OpenAI pulled a version of ChatGPT it described as "noticeably more sycophantic," in part due to reports that the tool was worsening psychotic delusions and encouraging dangerous impulses in users with mental illness.

"Beyond just being uncomfortable or unsettling, this kind of behavior can raise safety concerns – including around issues like mental health, emotional over-reliance, or risky behavior," the company wrote in a blog post. "One of the biggest lessons is fully recognizing how people have started to use ChatGPT for deeply personal advice – something we didn't see as much even a year ago."

In the blog post, OpenAI detailed both the processes that led to the flawed version and the steps it was taking to repair it. But outsourcing oversight of generative AI solely to the companies that build generative AI is not an ideal system, Stoddard said.

"What is a risk-benefit tolerance that's reasonable? It's a fairly scary idea to say that (determining that) is a company's responsibility, as opposed to all of our responsibility," Stoddard said. "That's a decision that's supposed to be society's decision." – Los Angeles Times/Tribune News Service

Those suffering from problems can reach out to the Mental Health Psychosocial Support Service at 03-2935 9935 or 014-322 3392; Talian Kasih at 15999 or 019-261 5999 on WhatsApp; Jakim's (Department of Islamic Development Malaysia) family, social and community care centre at 0111-959 8214 on WhatsApp; and Befrienders Kuala Lumpur at 03-7627 2929 or go to for a full list of numbers nationwide and operating hours, or email sam@
