China's humanoid robots generate more soccer excitement than their human counterparts


BEIJING (AP) — While China's men's soccer team hasn't generated much excitement in recent years, humanoid robot teams have won over fans in Beijing based more on the AI technology involved than any athletic prowess shown.
Four teams of humanoid robots faced off in fully autonomous 3-on-3 soccer matches powered entirely by artificial intelligence on Saturday night in China's capital in what was touted as a first in China and a preview for the upcoming World Humanoid Robot Games, set to take place in Beijing.
According to the organizers, a key aspect of the match was that all the participating robots operated fully autonomously using AI-driven strategies without any human intervention or supervision.
Equipped with advanced visual sensors, the robots were able to identify the ball and navigate the field with agility.
They were also designed to stand up on their own after falling. However, during the match several still had to be carried off the field on stretchers by staff, adding to the realism of the experience.
China is stepping up efforts to develop AI-powered humanoid robots, using sports competitions like marathons, boxing, and football as a real-world proving ground.
Cheng Hao, founder and CEO of Booster Robotics, the company that supplied the robot players, said sports competitions offer the ideal testing ground for humanoid robots, helping to accelerate the development of both algorithms and integrated hardware-software systems.
He also emphasized safety as a core concern in the application of humanoid robots.
'In the future, we may arrange for robots to play football with humans. That means we must ensure the robots are completely safe,' Cheng said. 'For example, a robot and a human could play a match where winning doesn't matter, but real offensive and defensive interactions take place. That would help audiences build trust and understand that robots are safe.'
Booster Robotics provided the hardware for all four university teams, while each school's research team developed and embedded their own algorithms for perception, decision-making, player formations, and passing strategies—including variables such as speed, force, and direction, according to Cheng.
In the final match, Tsinghua University's THU Robotics defeated China Agricultural University's Mountain Sea team 5–3 to win the championship.
Mr. Wu, a supporter of Tsinghua, celebrated their victory while also praising the competition.
'They (THU) did really well,' he said. 'But the Mountain Sea team (of Agricultural University) was also impressive. They brought a lot of surprises.'
China's men have made only one World Cup appearance and have already been knocked out of next year's competition in Canada, Mexico and the United States.


Related Articles

The rise of AI in CEO communications—and the credibility threat it poses

Fast Company

20 minutes ago



CEOs have become more than just corporate leaders—they're among the most valuable assets on the balance sheet. Great leadership can drive billions in market cap by shaping narratives and galvanizing stakeholders. But what happens when the communication tools they use to build credibility start to erode it?

We're entering a new era in CEO communications, one where human messages increasingly filter through the lens of AI. Analysts and investors have long leaned on AI-powered language models and sentiment analysis to dissect earnings calls, parsing executive tone, word choice, and delivery for signals on strategy, risk, or future performance. Now, CEOs and their teams are flipping the script—crafting messages with the help of generative AI to appeal to the very same systems analyzing them. It's a feedback loop of machines talking to machines. And while the tech arms race might make earnings calls look polished and sentiment scores spike, it also risks creating a sentiment gap. In the end, credibility is still the most valuable currency in leadership—and AI can't replace that.

The CEO Premium Meets the AI Arms Race

Corporate valuation has always been about more than just numbers. Investors have baked intangibles like brand equity, leadership narratives, and cultural impact into their models. As NYU finance professor Aswath Damodaran puts it, valuation is as much about a company's story as it is about spreadsheets. The CEO's job is to integrate those stories with their strategies. Jensen Huang didn't make Nvidia a trillion-dollar company because of flawless financial execution—he did it by selling a vision of AI as the engine of the future, powering everything from healthcare to climate solutions. That's the CEO premium in action: the ability to turn a strategic story into market-moving value. But here's what no one's saying out loud: when that story is over-engineered with AI, something critical is lost.
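The machine reading described above can be sketched with a toy lexicon-based scorer. This is illustrative only: real trackers use far larger finance-specific lexicons or full language models, and the word lists and transcript below are invented for the example.

```python
# Illustrative only: a toy lexicon-based sentiment score of the kind
# analysts run over earnings-call transcripts. The word lists and the
# sample transcript are invented for this example.
POSITIVE = {"growth", "strong", "record", "momentum", "confident", "exceed"}
NEGATIVE = {"decline", "headwind", "weak", "uncertain", "miss", "impairment"}

def sentiment_score(transcript: str) -> float:
    """Net positive-word share in [-1, 1]: (pos - neg) / total tone words."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

call = ("We delivered record growth and strong momentum this quarter, "
        "and we are confident we will exceed guidance despite one headwind.")
print(round(sentiment_score(call), 2))  # 6 positive vs. 1 negative tone word
```

Once executives know a scorer like this is listening, the incentive to pack prepared remarks with "positive" vocabulary is obvious, which is exactly the feedback loop the article describes.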
Consider this: Bank of America's S&P 500 corporate sentiment tracker, based on an analysis of thousands of earnings transcripts, hit an all-time high earlier this year, even as analysts lowered growth expectations for 2025. The disconnect is stark. While executives are optimizing their tone and language to look and sound bullish, it's masking underlying realities. We're looking at a sentiment bubble, where polished communications are designed to impress algorithms but are creating distance from actual performance. The result? A risk to long-term stakeholder confidence and broader market integrity.

The Credibility Gap Is Real—and Risky

AI-powered communication is an incredible asset. It can help executives sharpen their messages, anticipate audience reactions, and streamline delivery. But when it starts to obscure reality—or worse, is used as a veil—it risks blowing up the most important thing any CEO has: credibility. Markets thrive on credibility. Investors place a premium on CEOs who communicate clearly and consistently, and are transparent about their strengths and challenges. When communication becomes engineered for algorithms rather than stakeholders, it creates a hollow effect—polished on the surface, but leaving questions below.

This is more than theoretical. A recent study published in Harvard Business Review found that employees rated CEO messages as less helpful if they thought the message was AI-generated—even when it wasn't. Perception alone was enough to damage trust. That finding underscores the growing credibility risk CEOs face when misusing or leaning too heavily on AI.

What CEOs Need to Do Now

So where does this leave us? The CEOs who win in this new reality won't be the ones with the most AI-polished messaging—they'll be the ones who balance technology with authenticity. Here's how:

Speak to Stakeholders, Not Just Algorithms: Say what you mean. Own the hard truths. AI should enhance a message, not sanitize it. AI-generated communications might score well with language models, but stakeholders—investors, employees, customers—aren't grading on polish. They're looking for clarity.

Anchor Narratives in Performance: Narratives drive valuation, but they're meaningless without numbers. If the results are strong, show your math. If they're weak, explain why. Don't let AI overinflate optimism. Instead, use it to sharpen transparency.

Ensure AI Augments, Not Replaces: AI is great for refining delivery and identifying blind spots, but it can't replace human judgment or instinct. Companies that over-rely on AI-driven clones or sentiment engineering risk losing the real connection that drives stakeholder engagement.

Anticipate the Credibility Pivot: As sentiment inflation continues, markets will inevitably adjust. Investors will begin looking for the next differentiator, pivoting from polished delivery to deeper signals of authenticity. CEOs who lean into direct, unvarnished communication will stand out.

Get Ahead of What's Coming: The tools analyzing your every word are only getting more advanced. The only sustainable strategy? Consistency. Authenticity. Messages that hold up under scrutiny—algorithmic or human. If your leadership story can't survive deep analysis, it was never leadership to begin with.

The Way Forward: Still a Human Game

AI is reshaping the rules of executive communications, but the most successful leaders will recognize that technology is a supporting act—not the star of the show. At the end of the day, the algorithms don't close deals, inspire employees, or build relationships with customers—CEOs do. In this next chapter of leadership, the CEOs who win won't be the ones scoring highest on sentiment trackers. They'll be the ones who use AI responsibly, stay grounded in performance, and lead with clarity and authenticity. Because when machines talk past each other, the whole system breaks down.

Exclusive: White House announces AI education pledge

Axios

26 minutes ago



The White House is announcing an "AI Education Pledge" on Monday with commitments from more than 60 companies to provide AI education materials to K-12 students over the next four years, per an announcement shared exclusively with Axios.

The big picture: The education pledge aligns with the Trump administration's full-throated embrace of AI and AI companies, in contrast with the Biden administration, which focused on safety rather than encouraging education and technology uptake.

Driving the news: Tech companies will provide resources like funding, education materials, technology, curriculum and professional development as part of President Trump's executive order on AI and education. The pledge is meant to "spark curiosity in the technology and prepare the next generation for an AI-enabled economy," per the White House announcement.

What they're saying: "Fostering young people's interest and expertise in artificial intelligence is crucial to maintaining American technological dominance," OSTP director Michael Kratsios said in a statement. "These initial pledges from American organizations will help create new educational and workforce development opportunities for our students."

Some companies signing the pledge include: Adobe, Amazon, Booz Allen, Cisco, Dell, Google, Intel, MagicSchool, McGraw Hill, Microsoft, NVIDIA, OpenAI, Oracle and Workday.

What we're watching: The White House is gearing up for a big month focused on AI policy.

Autonomous Infrastructure And Trustworthy AI In Platform Engineering

Forbes

28 minutes ago



Srikanta Datta Prasad Tumkur is a Senior Staff Engineer at Coupang Global LLC, with over a decade of experience in platform engineering.

AI infrastructure is no longer just a support system; it is fast becoming the core of how modern digital businesses operate. As enterprises push harder into model training, inferencing and real-time decision making, their platforms must not only scale but think and act for themselves. This shift from automation to autonomy is now undeniable. According to IDC, more than 75% of new server investments by 2028 will be for AI-optimized systems. These platforms are expected to self-heal, auto-scale and even auto-configure their own networking and compute environments without manual intervention.

But autonomy alone is not enough. The bigger question emerging now is: Can we trust these systems? As platform teams begin to hand over operational control to machines, the enterprise must demand something more than speed or scale. It must demand proof. Trust in autonomous infrastructure can't be earned through uptime statistics or clever dashboards. It has to be designed into the platform from day one. This marks a pivotal shift in platform engineering—one that blends policy, provenance, ethics and sustainability directly into the core fabric of infrastructure design.

The Trust-Gradient Loop

At the heart of this transition is what I call the "trust-gradient loop." Traditional self-healing systems follow a simple loop: sense, decide, act. But that is no longer sufficient in an AI-driven world. The trust-gradient loop introduces two critical checkpoints: explain and verify. Before any action is taken, the system must be able to explain why it is taking that action and verify that it meets policy and compliance standards. This simple but powerful addition allows low-risk incidents to resolve automatically while ensuring that high-risk decisions get routed for human review, with cryptographic evidence and system-level context attached.
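The loop can be sketched in a few lines. This is a minimal illustration of the idea, not any vendor's implementation; the risk scores, the threshold, and the policy check are invented placeholders.

```python
# Minimal sketch of the trust-gradient loop: a proposed remediation is
# explained and verified before it may act, and anything above a risk
# threshold is escalated to a human. All values here are placeholders.
from dataclasses import dataclass

RISK_THRESHOLD = 0.5  # assumed: above this, a human must approve

@dataclass
class Action:
    name: str
    risk: float       # 0.0 (routine) .. 1.0 (dangerous), assigned upstream
    explanation: str  # why the system chose this action

def verify(action: Action) -> bool:
    """Placeholder policy gate; real systems would check signed policy-as-code."""
    return bool(action.explanation)  # no explanation -> fail closed

def run_loop(incident: str, proposed: Action) -> str:
    if not verify(proposed):
        return f"{incident}: BLOCKED (failed policy verification)"
    if proposed.risk > RISK_THRESHOLD:
        return f"{incident}: ESCALATED to human review ({proposed.explanation})"
    return f"{incident}: AUTO-RESOLVED via {proposed.name}"

print(run_loop("pod-oom", Action("restart-pod", 0.1, "memory limit exceeded")))
print(run_loop("db-failover", Action("promote-replica", 0.9, "primary unreachable")))
```

The key design point is that the explain and verify steps sit before the act step, so an unexplainable or non-compliant action never runs, and the risk threshold decides which incidents a machine may close on its own.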
It's a design principle that bridges autonomy with accountability. This isn't just theory. We're already seeing early implementations across the industry. Microsoft's Network Infrastructure Copilot has shown how artificial intelligence for IT operations (AIOps) platforms can autonomously resolve issues while keeping human operators in the loop with detailed diagnostics. Meanwhile, OpenAI's Preparedness Framework includes documented assurance processes before large-scale model deployment, and the company embeds C2PA-based "content credentials"—cryptographically signed provenance metadata—in all DALL-E 3 images and plans to do the same for Sora-generated videos. These examples highlight how leading organizations are moving from automation that reacts to infrastructure that justifies itself.

Governance

Governance, too, is being redefined. Traditional governance models relied on process checklists and committee reviews. But in an autonomous world, governance has to operate at machine speed. Frameworks like NIST's AI Risk Management Framework and Gartner's AI TRiSM model now advocate for embedding governance policies directly into the control plane. These policies run alongside the workload and validate everything, from bias in data to environmental impact, as code, not as slideware. When governance becomes machine-readable, platforms can audit themselves in real time and provide traceable records for every decision made.

Sustainability

One particularly overlooked area in this conversation is sustainability. With the explosion of AI workloads, energy and carbon emissions are becoming boardroom issues. AWS's Well-Architected Framework now includes a sustainability pillar, encouraging developers to treat carbon budgets like any other system service level objective (SLO).
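Treating a carbon budget as an SLO can be sketched as a simple pipeline gate. The budget figure and the per-deploy estimates below are invented for illustration; a real pipeline would pull estimates from a measurement tool such as a cloud provider's carbon-footprint reporting.

```python
# Hedged sketch: a carbon budget treated as an SLO that gates deploys.
# The budget and the per-deploy estimates are invented placeholders.
CARBON_BUDGET_G_CO2E = 500.0  # assumed per-deploy budget, grams CO2-equivalent

def check_carbon_slo(estimated_g_co2e: float) -> bool:
    """True if the deploy fits the budget; a CI step would exit nonzero otherwise."""
    return estimated_g_co2e <= CARBON_BUDGET_G_CO2E

# Hypothetical per-deploy estimates, as a real pipeline might report them.
deploys = {"api-v2": 320.0, "train-job": 812.5}
for name, grams in deploys.items():
    status = "OK" if check_carbon_slo(grams) else "FAIL carbon SLO"
    print(f"{name}: {grams} gCO2e -> {status}")
```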
Forward-thinking organizations are embedding these budgets into their continuous integration (CI)/continuous delivery (CD) pipelines, ensuring that every container, model or API deployment is evaluated not just for performance but for environmental cost. In time, failing your carbon SLO may be treated as seriously as failing a latency target.

The Role Of The Platform Engineer

All of this leads to a fundamental redefinition of platform engineering roles. As systems grow more autonomous, the role of the platform engineer evolves from executor to designer of trust frameworks. McKinsey's "The State of AI in 2023" report found that AI high-performers already channel more than 20% of their digital-technology budgets into AI, and its 2024 research on tech-services talent highlights the rise of new responsible AI lead roles that govern ethics, sustainability and explainability. The talent shift is real and accelerating. Platform teams are no longer just writing Terraform and Kubernetes manifests—they are becoming architects of institutional trust.

So what does a modern playbook look like? First, define tiers of autonomy for every service: manual, assisted or autonomous. Second, attach explainability and verification gates to any action that crosses a defined risk threshold. Third, integrate sustainability audits into your build and deploy pipelines, not as a corporate social responsibility (CSR) checkbox but as a system constraint. Finally, make trust a live, measurable metric just like uptime, latency or cost.

In a world where AI systems learn, evolve and sometimes hallucinate, trust becomes the true North Star. Enterprises that embed trust into their platforms by design, by policy and by measurable action will find themselves not only resilient but differentiated. Their infrastructure won't just run the business—it will defend its reputation. The future of platform engineering is not just about machines that act.
It's about machines that explain, verify and earn our confidence. In that sense, autonomy is the easy part. Trust is the hard part, and the most valuable.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
