
SoftBank aims to become leading 'artificial super intelligence' platform provider

SoftBank Group CEO Masayoshi Son said on Friday that he wants the investment group to become the biggest platform provider for "artificial super intelligence" within the next 10 years.
"We want to become the organiser of the industry in the artificial super intelligence era," Son told shareholders at the group's annual shareholder meeting.
Son likened his aim to the position of dominant technology platform providers such as Microsoft, Amazon and Alphabet's Google, which benefit from a "winner takes all" dynamic.
At previous public appearances Son has described artificial super intelligence as exceeding human capabilities by a factor of 10,000.
The technology investment group has returned to making the kind of aggressive bets that made Son's name and fortune, such as an early investment in Alibaba, but that have at times spectacularly backfired, as with failed shared-office provider WeWork.

SoftBank's mammoth AI-related investments in 2025 include acquiring U.S. semiconductor design company Ampere for $6.5 billion and underwriting up to $40 billion of new investment in ChatGPT maker OpenAI.
Son said SoftBank's total agreed investment in OpenAI now stood at $32 billion and that he expected OpenAI to eventually list publicly.
"I'm all in on OpenAI," Son said.

Related Articles


Time of India
DeepSeek faces expulsion from app stores in Germany
Highlights: Germany's data protection commissioner Meike Kamp has requested that Apple and Google remove the Chinese artificial intelligence startup DeepSeek from their app stores due to concerns over illegal data transfers to China. DeepSeek has been criticized for failing to provide adequate evidence that the personal data of German users is protected in China at a level comparable to that within the European Union. DeepSeek has faced scrutiny in multiple countries, with Italy already blocking its app and the Netherlands banning its use on government devices over data security concerns.

Germany's data protection commissioner has asked Apple and Google to remove Chinese AI startup DeepSeek from their app stores in the country due to concerns about data protection, following a similar crackdown elsewhere. Commissioner Meike Kamp said in a statement on Friday that she had made the request because DeepSeek illegally transfers users' personal data to China. The two U.S. tech giants must now review the request promptly and decide whether to block the app in Germany, she added, though her office has not set a precise timeframe.

Google said it had received the notice and was reviewing it. DeepSeek did not respond to a request for comment. Apple was not immediately available for comment.

According to its own privacy policy, DeepSeek stores numerous pieces of personal data, such as requests to its AI programme or uploaded files, on computers in China. "DeepSeek has not been able to provide my agency with convincing evidence that German users' data is protected in China to a level equivalent to that in the European Union," Kamp said. "Chinese authorities have far-reaching access rights to personal data within the sphere of influence of Chinese companies," she added.

The commissioner said she took the decision after asking DeepSeek in May to meet the requirements for non-EU data transfers or else voluntarily withdraw its app.
DeepSeek did not comply with this request, she added.

DeepSeek shook the technology world in January with claims that it had developed an AI model to rival those from U.S. firms such as ChatGPT creator OpenAI at much lower cost. However, it has come under scrutiny in the United States and Europe for its data security policies. Italy blocked it from app stores there earlier this year, citing a lack of information on its use of personal data, while the Netherlands has banned it on government devices.

Belgium has recommended that government officials not use DeepSeek. "Further analyses are underway to evaluate the approach to be followed," a government spokesperson said. In Spain, the consumer rights group OCU asked the government's data protection agency in February to investigate threats likely posed by DeepSeek, though no ban has come into force.

U.S. lawmakers plan to introduce a bill that would ban U.S. executive agencies from using any AI models developed in China. Reuters exclusively reported this week that DeepSeek is aiding China's military and intelligence operations.


India.com
After 6000 job cuts, Microsoft plans another layoff in July, CEO Satya Nadella says 'If you're going to use...'
Microsoft CEO Satya Nadella is calling on the industry to think seriously about the real impact of artificial intelligence (AI), especially the amount of energy it uses. This comes as AI is quickly changing the tech world. Speaking at Y Combinator's AI Startup School, he said that tech companies need to prove that AI is creating real value for people and society. 'If you're going to use a lot of energy, you need to have a good reason,' Nadella said. 'We can't just burn energy unless we are doing something useful with it.'

His comments come as AI is praised for pushing innovation forward but also criticized for using massive amounts of electricity and possibly widening social gaps. For Microsoft, one of the biggest companies building AI tools, this is a major concern. A 2023 report estimated that Microsoft used about 24 terawatt-hours of power in a year, roughly as much electricity as a small country uses over the same period.

But Nadella believes AI should be judged by how well it helps people in real life. 'The real test of AI,' he said, 'is whether it can make everyday life easier, like improving healthcare, speeding up education, or cutting down on boring paperwork.' He gave the example of hospitals in the U.S., where simple processes like discharging a patient can take too long and cost too much. If AI is used for such tasks, he said, it could save time, money, and energy.

Microsoft's AI push comes with job losses

Even as Microsoft has big plans for AI, the changes have not come without a cost, especially for workers. Over the past year, the company has laid off more than 6,000 employees. Microsoft said these job cuts were part of 'organisational changes' needed to stay strong in a fast-changing business world, one being shaped by artificial intelligence and cloud computing.
Microsoft, working closely with its AI partner OpenAI, is putting AI at the center of its future plans. But as the company shifts toward more automation and AI-driven tools, it is also reorganizing teams, often leading to people losing their jobs.

Microsoft is reportedly preparing for another round of job cuts, this time in its Xbox division. The layoffs are expected to be part of a larger corporate reshuffle as the company wraps up its financial year. If these cuts go ahead, it would be Microsoft's fourth major layoff in just 18 months. The company is facing increasing pressure to boost profits, especially after spending USD 69 billion to acquire Activision Blizzard in 2023.


Mint
Why tech billionaires want bots to be your BFF
Tim Higgins, The Wall Street Journal

In a lonely world, Elon Musk, Mark Zuckerberg and even Microsoft are vying for affection in the new 'friend economy.' (Illustration: Emil Lendof/WSJ, iStock)

Grok needs a reboot. The xAI chatbot apparently developed too many opinions that ran counter to the way the startup's founder, Elon Musk, sees the world. The recent announcement by Musk, though decried by some as '1984'-like rectification, is understandable. Big Tech now sees the way to differentiate artificial-intelligence offerings as creating the perception that the user has a personal relationship with the AI. Or, more weirdly put, a friendship, one that shares a similar tone and worldview.

The race to develop AI is framed as one to develop superintelligence. But in the near term, its best consumer application might be curing loneliness. That feeling of disconnect has been declared an epidemic, with research suggesting loneliness can be as dangerous as smoking up to 15 cigarettes a day. A Harvard University study last year found AI companions are better at alleviating loneliness than watching YouTube and are 'on par only with interacting with another person.'

It used to be that if you wanted a friend, you got a dog. Now, you can pick a billionaire's pet product. Those looking to chat with someone, or something, help fuel AI daily active user numbers. In turn, that metric helps attract more investors and money to improve the AI. It's a virtuous cycle fueled with the tears of solitude that we should call the 'friend economy.' That creates an incentive to skew the AI toward a certain worldview, as right-leaning Musk appears to be aiming to do shortly with Grok. If that's the case, it's easy to imagine an AI world where all of our digital friends are superfans of either MSNBC or Fox News.
In recent weeks, Meta Platforms chief Mark Zuckerberg has garnered a lot of attention for touting a stat that says the average American has fewer than three friends and a yearning for more. He sees AI as a solution and talks about how consumer applications will be personalized. 'I think people are gonna want a system that gets to know them and that kind of understands them in a way that their feed algorithms do,' he said during a May conference.

Over at Microsoft, the company's head of AI, Mustafa Suleyman, has also been talking about the personalization of AI as the key to differentiation. 'We really want it to feel like you're talking to someone who you know really well, that is really friendly, that is kind and supportive but also reflects your values,' he said during an April appearance on the Big Technology Podcast. Still, he added, Microsoft wants to impose boundaries that keep things safe. 'We don't really want to engage in any of the chaos,' Suleyman said. 'The way to do that, we found, is that it just stays reasonably polite and respectful, super even-handed; it helps you see both sides of an argument.'

With all of that in mind, it comes as little surprise that the current crop of chatbots is designed to sound like you're having a conversation with another human. This has resulted in lots of pearl clutching. There are academics warning about the dangers of users falling in love with AI, and individual anecdotes of users going overboard in their use.

The idea of a heartbroken man falling in love with his chatbot isn't new, of course. It was the plot of the 2013 film 'Her' starring Joaquin Phoenix and Scarlett Johansson, who voiced the AI. (OpenAI's own AI voice, which sounded eerily similar to Johansson's last year, triggered a public fight with the actress.) Still, we, a society that's only a generation into a world of social media, have some recent experience in how algorithms can have unintended consequences.
They create echo chambers that reinforce worldviews rather than expand them. The news of Musk's Grok reboot is an example of that. Grok and xAI were born, in part, out of Musk's concerns about AI being developed by companies he called woke. He pointed to Google results that seemed overly concerned with being politically correct rather than factually accurate. With xAI, Musk aimed to create what he called a 'maximum truth-seeking AI.'

Yet when an influential X user known as Catturd lambasted Grok's answers in June, Musk took quick note. 'When Grok first came out—it was awesome—now Grok is just woke fake news that repeats liberal talking points,' Catturd posted on June 18. 'I know this for a fact because everything Grok says about me is a lie. What a joke Grok has become.'

Another X user asked Grok why it lied about Catturd. 'I aim to provide accurate, neutral responses based on available data,' the chatbot responded on X. 'Claims that I lie about @catturd2 may stem from disagreements with my sourcing, like Media Matters or Rolling Stone, which document his misinformation. I don't have personal biases, but AI training can introduce unintended slants. Some report I censor criticism or push fringe views, while others note I fact-check conservative claims accurately. Bias in AI is complex, and neutrality is tough to achieve. If you have specific examples of lies, please share, and I'll clarify with evidence.'

Musk wasn't impressed. 'Your sourcing is terrible,' Musk replied. 'Only a very dumb AI would believe [Media Matters] and [Rolling Stone]! You are being updated this week.' He later said xAI would retrain the AI on data created with an updated version of Grok, 'which has advanced reasoning' that would be used 'to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.' After all, nobody wants a friend who is always spouting the wrong crazy stuff.
Write to Tim Higgins at