Self-charging robots pave way for never-ending revolt

Digital Trends · 2 days ago
If you've always been one to scoff at the idea of a robot uprising, then this story out of China might give you pause for thought.
It's about what is apparently the first-ever humanoid robot that's able to change its own battery pack. Yes, you read that right — a humanoid robot that's able to realize when it's running low on juice, and then go through the process of swapping out its battery for a fully charged one. All by itself … without any human intervention.
The robot, called Walker S2, is built by Shenzhen-based Ubtech, so we know who to blame if those bots do ever take over.
Ubtech released a video showing Walker S2 autonomously swapping out its battery, a process that will enable it to get back to work, whether that involves subjugating humans or, hopefully, something a little less alarming, like explaining meal times to newly arrived hotel guests.
Walker S2, which has been in development since 2015, is 64 inches tall (162 cm), tips the scales at 94.8 pounds (43 kg), and runs on a 48-volt lithium battery.
Each fully charged battery gives the robot enough power to walk for two hours or stand for four hours. A depleted battery takes about 90 minutes to fully charge once it's placed back in the charger.
Ubtech's humanoid robot is still in the research and development stage, though it's also being tested in a range of commercial and industrial settings, as well as in education for teaching robotics and AI.
The company's aim is for its humanoid robot to enhance human capabilities and improve people's quality of life, particularly in areas like healthcare, education, and the service industries, while integrating smoothly into human environments.
The technology powering humanoid robots has been making rapid advancements in the last few years, with major developments in AI helping to make them smarter than ever.
Tech companies in China, the U.S., and beyond are in a race to produce the most sophisticated robots that not only move in a human-like way, but think like a human, too, and clear progress is being made in terms of both physical movement and decision-making abilities.
But a full-on takeover? Well, any chance of that still feels like a ways off.

Related Articles

Google Search is readying the next generation for AI: Morning Brief

Yahoo · 6 minutes ago

In case there was any thought tech giants would ease up on AI spending sprees, Alphabet (GOOG, GOOGL) threw another $10 billion into the mix just to be sure. The company's stellar quarter — flashing strength in advertising and its cloud businesses — appeared to more than justify the amped up investment, which is now set to reach $85 billion this year. But the Google parent also succeeded in advancing another urgent mission: convincing investors it can transition its search empire into an AI-infused one.

AI Overviews, Google's search product that summarizes key information on a topic or question, has grown from 1.5 billion monthly users to more than 2 billion, underscoring its strong adoption. Google executives have framed its enhanced search features as an evolution of the company's core business. Along with AI Mode — an expansion of Overviews — the company says the tools are driving people to search even more. Double-digit revenue growth in search suggests AI is expanding the market.

"We like the integration of AI features (AI Overviews and AI Mode) within Google Search and view these additions as important in maintaining Google Search's relevance, especially with younger users, while opening up new generative AI monetization vectors for the firm," said Malik Ahmed Khan, equity analyst at Morningstar, in a note on Thursday.

The knock on Google's AI approach is that in trying to fend off a new wave of AI-powered answer engines, like those from Perplexity and OpenAI, its reinvention will cannibalize search revenues. But Google's advertising business has proven resilient, even as AI adoption has grown.

"Another stable quarter for Search results increases our confidence in the AI transition and should ease concerns on a potential revenue reset," said Bank of America analysts Justin Post and Nitin Bansal in a note Thursday.

It also helps that Google's cloud business is a force of its own. Executives said the cloud unit now touts an annual revenue run-rate of more than $50 billion, which means questions over AI monetization don't have to be answered right away. Besides, Google's AI-powered rivals aiming to disrupt the web browser also have to become advertising powerhouses to take on the search giant's dominant position.

Of course, Google's race to integrate AI into the search experience has broader implications for online publishers and the way people interact with the web. Google users who are shown an AI summary are less likely to click on links to other websites than users who do not see one, according to a recent Pew Research Center report. Google users who encountered an AI summary also rarely clicked on a link in the summary itself, the study found.

The findings, which were first published in May and reposted this week with additional analysis, add weight to concerns that AI-powered answer engines will steer people's attention away from the businesses, subject experts, writers, and artists that rely on Search to send people their way.

And on one level, that's Google's problem too. Tearing down a successful ad model to build out an ambitious AI-centered regime could ultimately diminish the broader internet and Google's financial standing. But the company's leaders have expressed confidence they can usher in a new age of search. Whether the ecosystem Google helped create will be sacrificed in the process isn't yet answerable in a tidy box. Early returns suggest perhaps not.

Hamza Shaban is a reporter for Yahoo Finance covering markets and the economy. Follow Hamza on X @hshaban.

Trump's AI plan aims to cement tech ties to GOP

New York Post · 8 minutes ago

At an artificial intelligence forum in Washington, DC, on Wednesday, Donald Trump gave his first speech detailing the White House's new AI strategy.

The half-day event — co-hosted by AI Czar David Sacks' 'All In' Podcast and the Hill & Valley Forum — found Trump and key officials outlining how they want to deliver more 'winning' when it comes to America's AI dominance. It also showed how deeply Republicans have cemented an alliance with the tech community.

Alongside Trump and members of his administration were notable private-sector figures like AMD CEO Lisa Su, Palantir CTO Shyam Sankar and NVIDIA CEO Jensen Huang — all speaking about how aligned they are with the White House's policies.

[Photo: AMD CEO Lisa Su, pictured with Senator Ted Cruz (R-Texas), was among the private-sector AI figures who spoke at the event.]

In a fireside chat with Sacks and his podcast co-hosts, Huang, who was recently granted approval to resume AI chip sales in China following a prolonged ban, solely credited Trump for enabling US leadership in artificial intelligence. When asked if the US had an advantage in the AI race, Huang — either genuine or genuflecting — said, 'America's unique advantage that no country possibly has is President Trump.'

Huang also revealed another important fact during his address: He owns approximately 50 to 60 identical copies of his signature leather jacket.

Meanwhile, Secretary of the Interior Doug Burgum joined Secretary of Energy Chris Wright to highlight the administration's support for AI infrastructure, urging business leaders to ask for help securing energy resources for data centers and other projects. 'Please contact us,' Burgum said. 'We help people build projects.'

The sentiment that Silicon Valley is aligned with America's interests was echoed by Trump, who said he sees 'a new spirit of patriotism and loyalty in Silicon Valley … we need companies to be all in for America.'

[Photo: Donald Trump, pictured with Chief Technology Officer Michael Kratsios, AI Czar David Sacks and White House Staff Secretary Will Scharf, signed multiple executive orders on AI Wednesday.]

He also promised a nation 'where innovators are rewarded' with streamlined regulations and significant investments in AI infrastructure.

The White House's 28-page 'Winning the AI Race: America's AI Action Plan,' unveiled at the conference, outlines three pillars to secure US dominance in the industry: accelerating innovation by removing regulatory barriers, building infrastructure through expedited permits for data centers and semiconductor facilities, and promoting American AI standards globally while ensuring models are free from bias.

This story is part of NYNext, an indispensable insider insight into the innovations, moonshots and political chess moves that matter most to NYC's power players (and those who aspire to be).

Trump administration officials told me that, while they're focused on helping big players like Nvidia win, they want all Americans to benefit.

Kelly Loeffler, head of the Small Business Administration, told me the AI plan will be broadly applied to all areas of government — and the economy. She said she has used AI to refine her department's loan underwriting program and is allowing small businesses to use their SBA loan to invest in AI software. Loeffler has been meeting with 'small business owners using artificial intelligence to level the playing field — building new business on the backs of AI,' she said.

As to whether tech's alliance with MAGA will continue, private sector attendees told me they believe the answer is yes. The ongoing threat of a potential 'communist' — as the president referred to Democratic mayoral nominee Zohran Mamdani in the speech — in charge of New York City has been enough to keep some innovators aligned with Trump.

And Loeffler, who previously ran software company Bakkt, said she believes the alliance is permanent since the two groups are ideologically aligned. 'Supporting free enterprise is something conservatives have always done and that lifts everyone up,' Loeffler told me. 'It shouldn't be a political issue, but it was because the Biden administration locked down innovation … the left has gone further towards socialism, which locks down innovation.'

Send NYNext a tip: NYNextLydia@

Why Do Some AI Models Hide Information From Users?

Time Business News · 9 minutes ago

In today's fast-evolving AI landscape, questions around transparency, safety, and the ethical use of AI models are growing louder. One particularly puzzling question stands out: why do some AI models hide information from users? For an AI solutions or product engineering company, understanding this dynamic is not merely academic; building trust, maintaining compliance, and producing responsible innovation all depend on it. Drawing on in-depth research, professional experience, and the practical difficulties of large-scale AI deployment, this article examines the causes of this behavior.

AI is a powerful instrument. It can help with decision-making, task automation, content creation, and even conversation replication. But enormous power also carries a great deal of responsibility, and that responsibility at times includes intentionally hiding or denying users access to information.

Let's look at the figures:

- According to OpenAI's 2023 Transparency Report, GPT-based models declined over 4.2 million requests for breaking safety rules, such as requests involving violence, hate speech, or self-harm.
- A Stanford study on large language models (LLMs) found that more than 12% of filtered queries were not intrinsically harmful but were instead caught by overly aggressive filters, raising concerns about 'over-blocking' and its effect on user experience.
- Research from the AI Incident Database shows that in 2022 alone, there were almost 30 cases where private, sensitive, or confidential information was inadvertently shared or made public by AI models.

At its core, the goal of any AI model — especially large language models — is to assist, inform, and solve problems. But that doesn't always mean full transparency.

AI models are trained on large-scale datasets drawn from books, websites, forums, and more. This training data can contain harmful, misleading, or outright dangerous content. So AI models are designed to:

- Avoid sharing dangerous information, like how to build weapons or commit crimes.
- Reject offensive content, including hate speech or harassment.
- Protect privacy by refusing to share personal or sensitive data.
- Comply with ethical standards, avoiding controversial or harmful topics.

As an AI product engineering company, we often embed guardrails — automatic filters and safety protocols — into AI systems. They are not arbitrary; they are required to prevent misuse and follow the rules.

Expert insight: in projects where we developed NLP models for legal tech, we had to implement multi-tiered moderation systems that auto-redacted sensitive terms. This is not over-caution; it's compliance in action.

In AI, compliance is not optional. Companies building and deploying AI must align with local and international laws, including:

- GDPR and CCPA: privacy regulations requiring data protection.
- COPPA: preventing AI from sharing adult content with children.
- HIPAA: safeguarding health data in medical applications.

These legal boundaries shape how much an AI model can reveal. For example, a model trained on healthcare diagnostics cannot disclose medical information unless authorized. This is where AI solutions companies come in, designing systems that comply with complex regulatory environments.
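To make the guardrail idea concrete, here is a minimal sketch of the kind of pre-filter such systems wrap around a model. It is illustrative only: the categories, keyword patterns, and the `call_model` stub are assumptions for the example, and production guardrails typically use trained moderation classifiers rather than keyword rules.

```python
import re
from dataclasses import dataclass

# Illustrative policy categories. Real guardrails use trained moderation
# classifiers; these keyword patterns are simplified placeholders.
BLOCKED_PATTERNS = {
    "dangerous_instructions": re.compile(r"\b(build a weapon|make a bomb)\b", re.I),
    "personal_data": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # a US-SSN-like pattern
}

@dataclass
class ModerationResult:
    allowed: bool
    category: str | None = None

def moderate(prompt: str) -> ModerationResult:
    """Decide whether a prompt may reach the model, and record why if not."""
    for category, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(prompt):
            return ModerationResult(allowed=False, category=category)
    return ModerationResult(allowed=True)

def call_model(prompt: str) -> str:
    """Hypothetical downstream model call, stubbed for the example."""
    return f"[model response to: {prompt!r}]"

def answer(prompt: str) -> str:
    result = moderate(prompt)
    if not result.allowed:
        # Refuse with a named reason instead of a bare "I can't answer that."
        return (f"I can't help with that: the request matched the "
                f"'{result.category}' safety policy.")
    return call_model(prompt)

print(answer("How do I make a bomb?"))          # refused, with the category named
print(answer("What's the capital of France?"))  # passes through to the model
```

The key design point the sketch captures is that the filter sits in front of the model and records the policy category it tripped, so a refusal can carry an explanation rather than a flat denial.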
Some users attempt to jailbreak AI models to make them say or do things they shouldn't. To counter this, models may:

- Refuse to answer certain prompts.
- Deny requests that seem manipulative.
- Mask internal logic to avoid reverse engineering.

As AI becomes more integrated into cybersecurity, finance, and policy applications, hiding certain operational details becomes a security feature, not a bug.

Although the intentions are usually good, there are consequences. Many users, including academic researchers, find that AI models:

- Avoid legitimate topics under the guise of safety.
- Respond vaguely, creating unproductive interactions.
- Fail to explain why an answer is withheld.

For educators or policymakers relying on AI for insight, this lack of transparency can create friction and reduce trust in the technology.

Industry observation: in an AI-driven content analysis project for an edtech firm, over-filtering prevented the model from discussing important historical events. We had to fine-tune it carefully to balance educational value and safety.

If an AI model consistently refuses to respond to a certain type of question, users may begin to suspect:

- Bias in the training data
- Censorship
- Opaque decision-making

This fuels skepticism about how the model is built, trained, and governed. For AI solutions companies, this is where transparent communication and explainable AI (XAI) become crucial.

So, how can we make AI more transparent while keeping users safe? Models should not just say, 'I can't answer that.' They should explain why, with context. For instance: 'This question may involve sensitive information related to personal identity. To protect user privacy, I've been trained to avoid this topic.' This builds trust and makes AI systems feel cooperative rather than authoritarian.

Instead of blanket bans, modern models use multi-level safety filters. Some emerging techniques include:

- SOFAI multi-agent architecture: different AI components manage safety, reasoning, and user intent independently.
- Adaptive filtering: filters that consider user role (researcher vs. child) and intent, as in the sketch after this section.
- Deliberate reasoning engines: engines that use ethical frameworks to decide what can be shared.

As an AI product engineering company, we consider incorporating these layers vital in product design, especially in domains like finance, defense, or education.

AI developers and companies must communicate:

- What data was used for training
- What filtering rules exist
- What users can (and cannot) expect

Transparency helps policymakers, educators, and researchers feel confident using AI tools in meaningful ways.

Recent work, like DeepSeek's efficiency breakthrough, shows how rethinking distributed systems for AI can improve not just speed but transparency. DeepSeek used Mixture-of-Experts (MoE) architectures to cut down on pointless communication, which also means less noise in the model's decision-making path, making its logic easier to audit and interpret. Traditional systems often fail because they try to fit AI workloads into outdated paradigms. Future models should focus on:

- Asynchronous communication
- Hierarchical attention patterns
- Energy-efficient design

These changes improve not just performance but also trustworthiness and reliability, which are key to information transparency.
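Returning to the adaptive-filtering idea above, here is a minimal sketch of role-aware filtering with explained refusals. The roles, thresholds, and the `classify_risk` helper are hypothetical assumptions made up for this example, not any real library's API; a production system would score prompts with a trained moderation model.

```python
from enum import Enum

class Role(Enum):
    CHILD = "child"
    GENERAL = "general"
    VERIFIED_RESEARCHER = "verified_researcher"

# Hypothetical maximum risk score each role may receive (0.0 benign, 1.0 severe).
RISK_THRESHOLDS = {
    Role.CHILD: 0.1,
    Role.GENERAL: 0.5,
    Role.VERIFIED_RESEARCHER: 0.8,
}

def classify_risk(prompt: str) -> float:
    """Stand-in for a trained risk classifier; real systems do not use
    string checks like this."""
    return 0.7 if "exploit" in prompt.lower() else 0.0

def filter_prompt(prompt: str, role: Role) -> tuple[bool, str]:
    """Allow or refuse a prompt based on the caller's role, and say why."""
    risk = classify_risk(prompt)
    if risk > RISK_THRESHOLDS[role]:
        return False, (
            f"Refused: this topic scores {risk:.1f} on our risk scale, above "
            f"the {RISK_THRESHOLDS[role]:.1f} limit for '{role.value}' accounts."
        )
    return True, "Allowed."

# The same question passes for a vetted researcher but not for a child account.
print(filter_prompt("Explain how this exploit works", Role.VERIFIED_RESEARCHER))
print(filter_prompt("Explain how this exploit works", Role.CHILD))
```

The specifics are invented, but the shape matches the article's argument: the same code path that blocks a request also produces a human-readable reason, pairing safety with explainability.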
If you're in academia, policy, or industry, understanding the why behind AI information hiding allows you to:

- Ask better questions
- Choose the right AI partner
- Design ethical systems
- Build user trust

As an AI solutions company, we integrate explainability, compliance, and ethical design into every AI project. Whether it's conversational agents, AI assistants, or complex analytics engines, we help organizations build models that are powerful, compliant, and responsible.

In conclusion, AI models hide information for safety, compliance, and security reasons. Trust, however, can only be established through transparency, clear explainability, and a strong commitment to ethical engineering. Whether you're building products, crafting policy, or doing research, understanding this behavior can help you make smarter decisions and use AI more effectively.

If you're a policymaker, researcher, or business leader looking to harness responsible AI, partner with an AI product engineering company that prioritizes transparency, compliance, and performance. Get in touch with our AI solutions experts, and let's build smarter, safer AI together. Transform your ideas into intelligent, compliant AI solutions today.
