
AI Impact Awards 2025: Meet the 'Best Of' Winners
Newsweek announced its inaugural AI Impact Awards last month, recognizing 38 companies for tackling everyday problems with innovative solutions.
Winners were announced across 13 categories, including Best of—Most Innovative AI Technology or Service, which highlighted some of the most outstanding cross-industry advancements in the practical use of machine learning.
Among the five recipients in the Best Of category is Ex-Human, a digital platform that allows users to create customizable AI humans to interact with. Ex-Human took home the Extraordinary Impact in AI Human Interactivity or Collaboration award.
Artem Rodichev, the founder and CEO of Ex-Human, told Newsweek that he started his company in response to the growing loneliness epidemic. According to the U.S. Surgeon General, some 30 percent of U.S. adults experience feelings of loneliness at least once a week. Those figures are even higher among young Americans: roughly 80 percent of Gen Z report feeling lonely. The epidemic is also keeping college kids up at night, and studies show that a lack of connection can lead to negative health outcomes.
To help bridge that gap, Rodichev sought to create empathetic characters, or what he described as "non-boring AI."
"If you chat with ChatGPT, it doesn't feel like you are chatting with your friend," Rodichev said. "You feel more like you're chatting with Mr. Wikipedia. The responses are informative, but they're boring."
What his company wanted to create, instead, was "AI that can feel, that can love, that can hate, that can feel emotions and can connect on an emotional level with users," Rodichev said. He cited the 1982 sci-fi classic Blade Runner and the Oscar-nominated film Her as two main forms of inspiration.
AI Impact Awards: Best of Most Innovative
Trained on millions of real conversations, Ex-Human enables companies to create personalized AI companions that can strengthen digital connections between those characters and human users.
Internal data suggests Ex-Human's technology is working. Its users spend an average of 90 minutes per day interacting with their AI companions, exchanging over 600 messages per week on average.
"At any moment, a user can decide, 'It's boring to chat with a character. I'll go check my Instagram feed. I'll watch this funny TikTok video.' But for some reason, they stay," Rodichev said. "They stay and continue to chat with these companions."
"A lot of these people struggle with social connections. They don't have a lot of friends and they have social anxiety," he said. "By chatting with these companions, they can reduce the social anxiety, they can improve their mental health. Because these kind of fake companions, they act as social trainers. They never judge you, they're available to you 24/7, you can discuss any fears, everything that you have in your head in a no-judgment environment."
Ex-Human projects that it will have 10 million users by early next year. The company has also raised over $3.7 million from investors, including venture capital firm Andreessen Horowitz.
Rodichev said that while Ex-Human's AIs have been popular among young people, he foresees them becoming more popular among the elderly—another population that often suffers from loneliness—as AI adoption becomes more widespread. He also anticipated that Ex-Human would be a popular technology for companies with big IP portfolios, like Disney, whose popular characters may be "heavily underutilized" in the age of AI.
Also among this year's "Best Of" winners is Fal.ai, a developer-focused platform that allows users to create AI-generated audio, video and images. Fal.ai was the recipient of this year's Extraordinary Impact in General Purpose AI Tool or Service award.
Co-founder Gorkem Yurtseven told Newsweek that the award was particularly meaningful to him "because it recognizes generative media as its own market and sector that is very promising and growing really fast."
Fal.ai is almost exclusively focused on B2B, selling AI media tools to help other companies generate audio, video and images for their business. Essentially a "building block," the AI allows different clients to have unique experiences, Yurtseven explained. So far, the biggest categories for fal.ai are advertising and marketing, and retail and e-commerce.
"AI-generated ads are a very clear product-market fit. You can create unlimited versions of the same ad and test it to understand which ones perform better than the others. The cost of creation also goes down to zero," Yurtseven said.
In the retail space, he said fal.ai has commonly been used for product photography. His company's capabilities allow businesses to display products on diverse backgrounds or in various settings, and even to build experiences where customers are pictured wearing the items.
Yurtseven believes that in some ways, he and his co-founder, Burkay Gur, got lucky. When large language models (LLMs) started to gain steam, many thought the market for image and video models was too small.
"Turns out, they were wrong," Yurtseven chuckled. "The market is very big, and now, everyone understands it."
"We were able to ride the LLM AI wave, in a sense," he said. "People got excited about AI. It was, in the beginning, mostly LLMs. But image and media models got included into that as well, and you were able to tap into the AI budgets of different companies that were created because of the general AI wave."
The one sector he's still waiting to see embrace AI-generated audio, images and video is social media. Yurtseven said this could happen on an existing app or a completely new platform, but so far, "a true social media app, at the largest scale, hasn't been able to utilize this in a fun and engaging way."
"I think it's going to be very interesting once someone figures that out," he said. "There's a lot of interesting and creative ways people are using this in smaller circles, but it hasn't reached a big social network where it becomes a daily part of our lives, similar to how Snapchat stories or Instagram stories became. So, I'm still expecting that's going to happen."
There's no doubt that AI continues to evolve at a rapid pace, but initiatives to address AI's potential dangers and ethical concerns haven't quite matched that speed.
The winner of this year's Extraordinary Impact in AI Transparency or Responsibility award is EY, which created a responsible AI framework compliant with one of the most comprehensive AI regulations to date: the European Union's Artificial Intelligence Act, which took effect on August 1, 2024.
Joe Depa, EY's global chief innovation officer, told Newsweek that developing the framework was a natural next step for EY, a global professional services company with 400,000 employees whose work spans consulting, tax, assurance, and strategy and transactions.
"If you think about what that is, it's a lot of data," Depa said. "And when I think about data, one of the most important components around data right now is responsible AI."
As a company operating in 150 countries worldwide, EY has seen firsthand how each country approaches AI differently. While some have more restrictive policies, others have almost none around responsible AI. This means there's no real "playbook" for what works and what doesn't work, Depa said.
"It used to be that there was policy that you could follow. The policymakers would set policy, and then you could follow that policy," he said. "In this case, the speed of technology and the speed of AI and the rate of technology and pace of technology evolution is creating an environment where we have to be much more proactive about the way that we integrate responsible AI into everything we do, until the policy makers can catch up."
"Now, it's incumbent upon leaders, and in particular, leaders that have technology prowess and have data sets to make sure that responsible AI is integrated into everything we do," Depa said.
As part of the framework, teams at EY implemented firm-wide AI definitions to promote consistency and clarity across all business functions. So far, their clients have been excited about the framework, Depa said.
"At EY, trust is everything that we do for our clients," he said. "We want to be a trusted brand that they can they can trust with their data—their tax data, the ability to assure that the data from our insurance business and then hopefully help them lead through this transformation."
"We're really proud of the award. We're excited for it. It confirms our approach, it confirms our understanding, and it confirms some of the core values that we have at EY," Depa said.
As part of Newsweek's AI Impact Awards, Pharebio and Axon were also recognized in the Best of—Most Innovative AI Technology or Service category. Pharebio received the Extraordinary Impact in AI Innovation award, while Axon received the Extraordinary Impact in Commercial Tool or Service Award.
To see the full list of winners and awards, visit the official page for Newsweek's AI Impact Awards.