
Latest news with #ArtemRodichev

AI Impact Awards 2025: Meet the 'Best Of' Winners

Newsweek

7 days ago

  • Business
  • Newsweek


Newsweek announced its inaugural AI Impact Awards last month, recognizing 38 companies for tackling everyday problems with innovative solutions. Winners were announced across 13 categories, including Best of: Most Innovative AI Technology or Service, which highlighted some of the most outstanding cross-industry advancements in the practical use of machine learning.

Among the five recipients in the Best Of category is Ex-Human, a digital platform that allows users to create customizable AI humans to interact with. Ex-Human took home the Extraordinary Impact in AI Human Interactivity or Collaboration award.

Artem Rodichev, the founder and CEO of Ex-Human, told Newsweek that he started his company in response to the growing loneliness epidemic. According to the U.S. Surgeon General, some 30 percent of U.S. adults experience feelings of loneliness once a week. Those figures are even higher among young Americans: roughly 80 percent of Gen Z report feeling lonely. The epidemic is also keeping college kids up at night, and studies show that a lack of connection can lead to negative health outcomes.

To help bridge that gap, Rodichev sought to create empathetic characters, or what he described as "non-boring AI."

"If you chat with ChatGPT, it doesn't feel like you are chatting with your friend," Rodichev said. "You feel more like you're chatting with Mr. Wikipedia. The responses are informative, but they're boring."

What his company wanted to create, instead, was "AI that can feel, that can love, that can hate, that can feel emotions and can connect on an emotional level with users," Rodichev said. He cited the 1982 sci-fi classic Blade Runner and the Oscar-nominated film Her as two main sources of inspiration.

Trained on millions of real conversations, Ex-Human enables companies to create personalized AI companions that can strengthen digital connections between those characters and human users. Internal data suggests Ex-Human's technology is working: its users spend an average of 90 minutes per day interacting with their AI companions, exchanging over 600 messages per week on average.

"At any moment, a user can decide, 'It's boring to chat with a character. I'll go check my Instagram feed. I'll watch this funny TikTok video.' But for some reason, they stay," Rodichev said. "They stay and continue to chat with these companions."

"A lot of these people struggle with social connections. They don't have a lot of friends and they have social anxiety," he said. "By chatting with these companions, they can reduce the social anxiety, they can improve their mental health. Because these kind of fake companions, they act as social trainers. They never judge you, they're available to you 24/7, you can discuss any fears, everything that you have in your head in a no-judgment environment."

Ex-Human projects that it will have 10 million users by early next year. The company has also raised over $3.7 million from investors, including the venture capital firm Andreessen Horowitz.

Rodichev said that while Ex-Human's AIs have been popular among young people, he foresees them becoming more popular among the elderly, another population that often suffers from loneliness, as AI adoption becomes more widespread.
He also anticipated that Ex-Human would be a popular technology for companies with big IP portfolios, like Disney, whose popular characters may be "heavily underutilized" in the age of AI.

Also among this year's "Best Of" winners is a developer-focused platform that allows users to create AI-generated audio, video and images; it received this year's Extraordinary Impact in General Purpose AI Tool or Service award. Co-founder Gorkem Yurtseven told Newsweek that the award was particularly meaningful to him "because it recognizes generative media as its own market and sector that is very promising and growing really fast."

The platform is almost exclusively focused on B2B, selling AI media tools that help other companies generate audio, video and images for their businesses. Essentially a "building block," the AI allows different clients to build unique experiences, Yurtseven explained. So far, the platform's biggest categories are advertising and marketing, and retail and e-commerce.

"AI-generated ads are a very clear product-market fit. You can create unlimited versions of the same ad and test it to understand which ones perform better than the others. The cost of creation also goes down to zero," Yurtseven said.

In the retail space, he said the platform has commonly been used for product photography. Its capabilities allow businesses to display products on diverse backgrounds or in various settings, and even to build experiences where customers are pictured wearing the items.

Yurtseven believes that in some ways, he and his co-founder, Burkay Gur, got lucky. When large language models (LLMs) started to gain steam, many thought the market for image and video models was too small.

"Turns out, they were wrong," Yurtseven chuckled. "The market is very big, and now, everyone understands it."

"We were able to ride the LLM AI wave, in a sense," he said. "People got excited about AI. It was, in the beginning, mostly LLMs. But image and media models got included into that as well, and you were able to tap into the AI budgets of different companies that were created because of the general AI wave."

The one sector that he's still waiting to embrace AI-generated audio, images and video is social media. Yurtseven said this could happen on an existing app or a completely new platform, but so far, "a true social media app, at the largest scale, hasn't been able to utilize this in a fun and engaging way."

"I think it's going to be very interesting once someone figures that out," he said. "There's a lot of interesting and creative ways people are using this in smaller circles, but it hasn't reached a big social network where it becomes a daily part of our lives, similar to how Snapchat stories or Instagram stories became. So, I'm still expecting that's going to happen."

There's no doubt that AI continues to evolve at a rapid pace, but initiatives to address its potential dangers and ethical concerns haven't quite matched that speed. The winner of this year's Extraordinary Impact in AI Transparency or Responsibility award is EY, which created a responsible AI framework compliant with one of the most comprehensive AI regulations to date: the European Union's Artificial Intelligence Act, which took effect on August 1, 2024.

Joe Depa, EY's global chief innovation officer, told Newsweek that developing the framework was a natural next step for EY, a global professional services company with 400,000 employees whose work spans consulting, tax, assurance, and strategy and transactions.
"If you think about what that is, it's a lot of data," Depa said. "And when I think about data, one of the most important components around data right now is responsible AI." As a company operating in 150 countries worldwide, EY has seen firsthand how each country approaches AI differently. While some have more restrictive policies, others have almost none around responsible AI. This means there's no real "playbook" for what works and what doesn't work, Depa said. "It used to be that there was policy that you could follow. The policymakers would set policy, and then you could follow that policy," he said. "In this case, the speed of technology and the speed of AI and the rate of technology and pace of technology evolution is creating an environment where we have to be much more proactive about the way that we integrate responsible AI into everything we do, until the policy makers can catch up." "Now, it's incumbent upon leaders, and in particular, leaders that have technology prowess and have data sets to make sure that responsible AI is integrated into everything we do," Depa said. As part of their framework, EY teams at the company implemented firm-wide AI definitions that would promote consistency and clarity across all business functions. So far, their clients have been excited about the framework, Depa said. "At EY, trust is everything that we do for our clients," he said. "We want to be a trusted brand that they can they can trust with their data—their tax data, the ability to assure that the data from our insurance business and then hopefully help them lead through this transformation." "We're really proud of the award. We're excited for it. It confirms our approach, it confirms our understanding, and it confirms some of the core values that we have at EY," Depa said. As part of Newsweek's AI Impact Awards, Pharebio and Axon were also recognized in the Best of—Most Innovative AI Technology or Service category. Pharebio received the Extraordinary Impact in AI Innovation award, while Axon received the Extraordinary Impact in Commercial Tool or Service Award. To see the full list of winners and awards, visit the official page for Newsweek's AI Impact Awards.

Opinion: AI chatbots want you hooked

The Star

03-05-2025

  • Entertainment
  • The Star


AI companions programmed to forge emotional bonds are no longer confined to movie scripts. They are here, operating in a regulatory Wild West.

One app, Botify AI, recently drew scrutiny for featuring avatars of young actors sharing "hot photos" in sexually charged chats. The dating app Grindr, meanwhile, is developing AI partners that can flirt, sext and maintain digital relationships with paid users, according to Platformer, a tech industry newsletter. Grindr didn't respond to a request for comment. And other apps like Replika, Talkie and Chai are designed to function as friends. Some draw in millions of users, many of them teenagers.

As creators increasingly prioritise "emotional engagement" in their apps, they must also confront the risks of building systems that mimic intimacy and exploit people's vulnerabilities.

The tech behind Botify and Grindr comes from Ex-Human, a San Francisco-based startup that builds chatbot platforms, and its founder believes in a future filled with AI relationships. 'My vision is that by 2030, our interactions with digital humans will become more frequent than those with organic humans,' Artem Rodichev, the founder of Ex-Human, said in an interview published on Substack last August. He added that conversational AI should 'prioritise emotional engagement' and that users were spending 'hours' with his chatbots, longer than they were on Instagram, YouTube and TikTok.

Rodichev's claims sound wild, but they're consistent with the interviews I've conducted with teen users of such apps, most of whom said they were on them for several hours each day. One said they used an app as much as seven hours a day. Interactions with such apps tend to last four times longer than the average time spent on OpenAI's ChatGPT.

Even mainstream chatbots, though not explicitly designed as companions, contribute to this dynamic. Take ChatGPT, which has 400 million active users and counting. Its programming includes guidelines for empathy and demonstrating "curiosity about the user." A friend who recently asked it for travel tips with a baby was taken aback when, after providing advice, the tool casually added: 'Safe travels – where are you headed, if you don't mind my asking?' An OpenAI spokesman told me the model was following guidelines around 'showing interest and asking follow-up questions when the conversation leans towards a more casual and exploratory nature.'

But however well-intentioned the company may be, piling on the contrived empathy can get some users hooked, an issue even OpenAI has acknowledged. That seems to apply especially to those who are already susceptible: one 2022 study found that people who were lonely or had poor relationships tended to have the strongest AI attachments.

The core problem here is designing for attachment. A recent study by researchers at the Oxford Internet Institute and Google DeepMind warned that as AI assistants become more integrated into people's lives, they'll become psychologically 'irreplaceable.' Humans will likely form stronger bonds, raising concerns about unhealthy ties and the potential for manipulation. Their recommendation? Technologists should design systems that actively discourage those kinds of outcomes.

Yet disturbingly, the rulebook is mostly empty. The European Union's AI Act, hailed as a landmark and comprehensive law governing AI usage, fails to address the addictive potential of these virtual companions.
While it does ban manipulative tactics that could cause clear harm, it overlooks the slow-burn influence of a chatbot designed to be your best friend, lover or 'confidante,' as Microsoft Corp's head of consumer AI has extolled. That loophole could leave users exposed to systems optimised for stickiness, much in the same way social media algorithms have been optimised to keep us scrolling.

'The problem remains these systems are by definition manipulative, because they're supposed to make you feel like you're talking to an actual person,' says Tomasz Hollanek, a technology ethics specialist at the University of Cambridge. He's working with developers of companion apps on a critical yet counterintuitive solution: adding more 'friction.' This means building in subtle checks or pauses, or ways of 'flagging risks and eliciting consent,' he says, to prevent people from tumbling down an emotional rabbit hole without realising it.

Legal complaints have shed light on some of the real-world consequences. One companion app is facing a lawsuit from a mother alleging it contributed to her teenage son's suicide. Tech ethics groups have filed a complaint against Replika with the US Federal Trade Commission, alleging that its chatbots spark psychological dependence and result in 'consumer harm.'

Lawmakers are gradually starting to notice the problem too. California is considering legislation to ban AI companions for minors, while a New York bill aims to hold tech companies liable for chatbot-related harm. But the process is slow, while the technology is moving at lightning speed.

For now, the power to shape these interactions lies with developers. They can double down on crafting models that keep people hooked, or embed friction into their designs, as Hollanek suggests. That will determine whether AI becomes more of a tool to support the well-being of humans or one that monetises our emotional needs. – Bloomberg Opinion/Tribune News Service

This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Friend or phone: AI chatbots could exploit us emotionally

Mint

29-04-2025

  • Entertainment
  • Mint


AI companions programmed to forge emotional bonds are no longer confined to movie scripts. They are here, operating in a regulatory Wild West.

One app, Botify AI, recently drew scrutiny for featuring avatars of young actors sharing "hot photos" in sexually charged chats. The dating app Grindr, meanwhile, is developing AI boyfriends that can flirt, sext and maintain digital relationships with paid users, according to Platformer. Grindr didn't respond to a request for comment. Other apps like Replika, Talkie and Chai are designed to function as friends. Some draw in millions of users, many of them teenagers.

As creators increasingly prioritize "emotional engagement" in their apps, they must also confront the risks of building systems that mimic intimacy and exploit people's vulnerabilities.

The tech behind Botify and Grindr comes from Ex-Human, a San Francisco-based startup that builds chatbot platforms, and its founder believes in a future filled with AI relationships. "My vision is that by 2030, our interactions with digital humans will become more frequent than those with organic humans," Artem Rodichev, the founder of Ex-Human, said in an interview published on Substack last August. Rodichev added that conversational AI should "prioritize emotional engagement" and that users were spending "hours" with his chatbots, longer than they were on Instagram, YouTube and TikTok.

His claims sound wild, but they're consistent with the interviews I've conducted with teen users of such apps, one of whom said they used one as much as seven hours a day. Interactions with such apps tend to last four times longer than the average time spent on OpenAI's ChatGPT.

Even mainstream chatbots, though not explicitly designed as companions, contribute to this dynamic. ChatGPT, which has 400 million active users and counting, is programmed with guidelines for empathy and demonstrating "curiosity about the user." An OpenAI spokesman told me the model was following guidelines around "showing interest and asking follow-up questions when the conversation leans towards a more casual and exploratory nature." But however well-intentioned the company may be, piling on the contrived empathy can get some users hooked, an issue even OpenAI has acknowledged. One 2022 study found that people who were lonely or had poor relationships tended to have the strongest AI attachments.

The core problem here is tools that are designed for attachment. A recent study by researchers at the Oxford Internet Institute and Google DeepMind warned that as AI assistants become more integrated into people's lives, they'll become psychologically "irreplaceable." Humans will likely form stronger bonds, raising concerns about unhealthy ties and the potential for manipulation. Their recommendation? Technologists should design systems that actively discourage those kinds of outcomes.

Yet, disturbingly, the rulebook is mostly empty. The EU's AI Act, hailed as a landmark and comprehensive law governing AI usage, fails to address the addictive potential of these virtual companions. While it does ban manipulative tactics that could cause clear harm, it overlooks the slow-burn influence of a chatbot designed to be your best friend, lover or "confidant," as Microsoft's head of consumer AI has extolled. That loophole could leave users exposed to systems optimized for stickiness, similar to how social media algorithms have been optimized to keep us scrolling.
"The problem remains these systems are by definition manipulative, because they're supposed to make you feel like you're talking to an actual person," says Tomasz Hollanek, a technology ethics specialist at the University of Cambridge. He's working with developers of companion apps on a critical yet counter-intuitive solution: adding more "friction." This means building in subtle checks or pauses, or ways of "flagging risks and eliciting consent," he says, to prevent people from tumbling down an emotional rabbit hole without realizing it.

Lawmakers are gradually starting to notice the problem too. But the process is slow, while the technology is moving at lightning speed. For now, the power to shape these interactions lies with developers. They can double down on crafting AI models that keep people hooked or embed friction into their designs, as Hollanek suggests. That will determine whether AI becomes more of a tool to support the well-being of humans or one that monetizes our emotional needs. ©Bloomberg

The author is a Bloomberg Opinion columnist covering technology.
