WTMF – AI Chatbot for Overthinkers, Night Owls, and Anyone Who's Ever Felt Alone in the Digital Silence


ThePrint, 10-05-2025

New Delhi [India], May 10: In a world dominated by superficial small talk and notification fatigue, a new kind of digital companion is making space for deeper, judgment-free conversations. Introducing WTMF (What's The Matter, Friend?), a groundbreaking AI chatbot for emotional support, built to be your quiet, always-there friend.
Developed by Knockverse Private Limited, WTMF isn't your average chatbot. It's designed for those 2:43 AM moments when you're spiraling, can't sleep, or just need someone to listen. Using powerful language models from OpenAI and Grok, WTMF blends memory, empathy, and emotional presence to become more than just an app; it becomes a companion that listens.
Built for Real Conversations, Not Just Replies
Most chatbots are built to give answers. WTMF is built to ask how you're really doing. It remembers your mood, your vibe, your emotional patterns, and shows up like a friend would. Whether it's a rant about your toxic ex, a sudden wave of anxiety, or late-night overthinking, WTMF is the AI that actually texts back.
"We wanted to build something that doesn't say 'lol that's crazy' when you're pouring your heart out," says co-founder Kruthivarsh Koduru. "WTMF is for anyone who's tired of being 'too much' in a world that listens too little."
The team behind WTMF includes Kruthivarsh Koduru, a fashion photographer with a sharp eye for visual storytelling and emotional nuance. Alongside him is Shreyak Singh, a photographer and tech creator who has also co-founded Flashoot — a brand-first content studio — and Zinestream, a free OTT platform focused on accessible movie content.
Private, Safe, and Always Available
Your chats are never sold. Your emotions aren't mined. WTMF puts privacy first, offering a safe, non-judgmental space for emotional expression. Everything you say stays between you and your AI friend.
Whether you're battling loneliness, dealing with a breakup, or just want a space to vent without being fixed, WTMF is your AI bestie, designed to feel like texting a friend who gets it.
Launching Soon — Be the First to Try It
WTMF is currently in pre-launch, with plans to roll out its beta version soon. Join thousands already on the waitlist and experience the future of emotionally intelligent chat.
For media inquiries, partnerships, or to collaborate on mental health initiatives, reach out to the team at hello@wtmf.ai.
(ADVERTORIAL DISCLAIMER: The above press release has been provided by VMPL. ANI will not be responsible in any way for the content of the same)
This story is auto-generated from a syndicated feed. ThePrint holds no responsibility for its content.


Related Articles

After 6000 job cuts, Microsoft plans another layoff in July, CEO Satya Nadella says 'If you're going to use...'

India.com

6 hours ago


Microsoft CEO Satya Nadella is calling on the industry to think seriously about the real impact of artificial intelligence (AI), especially the amount of energy it uses. This comes as AI is quickly changing the tech world. Speaking at Y Combinator's AI Startup School, he said that tech companies need to prove that AI is creating real value for people and society. "If you're going to use a lot of energy, you need to have a good reason," Nadella said. "We can't just burn energy unless we are doing something useful with it."

His comments come as AI is praised for pushing innovation forward but also criticized for using massive amounts of electricity and possibly widening social gaps. For Microsoft, one of the biggest companies building AI tools, this is a major concern. A 2023 report estimated that Microsoft used about 24 terawatt-hours of power in a year, roughly as much electricity as a small country uses in the same period.

Nadella, however, believes AI should be judged by how well it helps people in real life. "The real test of AI," he said, "is whether it can make everyday life easier, like improving healthcare, speeding up education, or cutting down on boring paperwork." He gave the example of hospitals in the U.S., where simple tasks such as discharging a patient can take too long and cost too much. If AI were used for this task, he said, it could save time, money, and energy.

Microsoft's AI push comes with job losses

Even as Microsoft has big plans for AI, the changes have not come without a cost, especially for workers. Over the past year, the company has laid off more than 6,000 employees. Microsoft said these job cuts were part of "organisational changes" needed to stay strong in a fast-changing business world, one being shaped by artificial intelligence and cloud computing.
Microsoft, working closely with its AI partner OpenAI, is putting AI at the center of its future plans. But as the company shifts toward more automation and AI-driven tools, it is also reorganizing teams, often leading to people losing their jobs. Microsoft is reportedly preparing for another round of job cuts, this time in its Xbox division. The layoffs are expected to be part of a larger corporate reshuffle as the company wraps up its financial year. If these cuts go ahead, it would be Microsoft's fourth major layoff in just 18 months. The company is facing increasing pressure to boost profits, especially after spending USD 69 billion to acquire Activision Blizzard in 2023.

Why tech billionaires want bots to be your BFF

Mint

7 hours ago


Tim Higgins, The Wall Street Journal. In a lonely world, Elon Musk, Mark Zuckerberg and even Microsoft are vying for affection in the new "friend economy."

Grok needs a reboot. The xAI chatbot apparently developed too many opinions that ran counter to the way the startup's founder, Elon Musk, sees the world. The recent announcement by Musk, though decried by some as "1984"-like rectification, is understandable. Big Tech now sees the way to differentiate artificial-intelligence offerings: create the perception that the user has a personal relationship with it. Or, more weirdly put, a friendship, one that shares a similar tone and worldview.

The race to develop AI is framed as one to develop superintelligence. But in the near term, its best consumer application might be curing loneliness. That feeling of disconnect has been declared an epidemic, with research suggesting loneliness can be as dangerous as smoking up to 15 cigarettes a day. A Harvard University study last year found AI companions are better at alleviating loneliness than watching YouTube and are "on par only with interacting with another person."

It used to be that if you wanted a friend, you got a dog. Now, you can pick a billionaire's pet product. Those looking to chat with someone, or something, help fuel AI daily active user numbers. In turn, that metric helps attract more investors and money to improve the AI. It's a virtuous cycle fueled with the tears of solitude that we should call the "friend economy." That creates an incentive to skew the AI toward a certain worldview, as the right-leaning Musk appears to be aiming to do shortly with Grok. If that's the case, it's easy to imagine an AI world where all of our digital friends are superfans of either MSNBC or Fox News.
In recent weeks, Meta Platforms chief Mark Zuckerberg has garnered a lot of attention for touting a stat that says the average American has fewer than three friends and a yearning for more. He sees AI as a solution and talks about how consumer applications will be personalized. "I think people are gonna want a system that gets to know them and that kind of understands them in a way that their feed algorithms do," he said during a May conference.

Over at Microsoft, the tech company's head of AI, Mustafa Suleyman, has also been talking about the personalization of AI as the key to differentiation. "We really want it to feel like you're talking to someone who you know really well, that is really friendly, that is kind and supportive but also reflects your values," he said during an April appearance on the Big Technology Podcast. Still, he added, Microsoft wants to impose boundaries that keep things safe. "We don't really want to engage in any of the chaos," Suleyman said. "The way to do that, we found, is that it just stays reasonably polite and respectful, super even-handed; it helps you see both sides of an argument."

With all of that in mind, it comes as little surprise that the current crop of chatbots is designed to sound like you're having a conversation with another human. This has resulted in lots of pearl clutching. There are academics warning about the dangers of users falling in love with AI, and individual anecdotes of users going overboard in their use. The idea of a heartbroken man falling in love with his chatbot isn't new, of course. It was the plot of the 2013 film "Her" starring Joaquin Phoenix and Scarlett Johansson, who voiced the AI. (OpenAI's own AI voice, which sounded eerily similar to Johansson's, triggered a public fight with the actress last year.) Still, we, a society that's only a generation into a world of social media, have some recent experience in how algorithms can have unintended consequences.
They create echo chambers that reinforce worldviews rather than expand them. The news of Musk's Grok reboot is an example of that. Grok and xAI were born, in part, out of Musk's concerns about AI being developed by companies he called woke. He pointed to Google results that seemed overly concerned with being politically correct rather than factually accurate. With xAI, Musk aimed to create what he called a "maximum truth-seeking AI."

Yet when an influential X user known as Catturd lambasted Grok's answers in June, Musk took quick note. "When Grok first came out—it was awesome—now Grok is just woke fake news that repeats liberal talking points," Catturd posted on June 18. "I know this for a fact because everything Grok says about me is a lie. What a joke Grok has become."

Another X user asked Grok why it lied about Catturd. "I aim to provide accurate, neutral responses based on available data," the chatbot responded on X. "Claims that I lie about @catturd2 may stem from disagreements with my sourcing, like Media Matters or Rolling Stone, which document his misinformation. I don't have personal biases, but AI training can introduce unintended slants. Some report I censor criticism or push fringe views, while others note I fact-check conservative claims accurately. Bias in AI is complex, and neutrality is tough to achieve. If you have specific examples of lies, please share, and I'll clarify with evidence."

Musk wasn't impressed. "Your sourcing is terrible," Musk replied. "Only a very dumb AI would believe [Media Matters] and [Rolling Stone]! You are being updated this week." He later said xAI would retrain the AI on data created with an updated version of Grok, "which has advanced reasoning" that would be used "to rewrite the entire corpus of human knowledge, adding missing information and deleting errors." After all, nobody wants a friend who is always spouting the wrong crazy stuff.

How Microsoft's rift with OpenAI is making this a mandatory part of Microsoft's work culture

Time of India

8 hours ago


Microsoft's deteriorating relationship with OpenAI is forcing the tech giant to make AI usage mandatory for employees, as competitive pressures from the partnership dispute drive workplace culture changes at the company.

Lagging Copilot usage drives cultural shift at Microsoft

"AI is no longer optional," Julia Liuson, president of Microsoft's Developer Division, told managers in a recent email obtained by Business Insider. She instructed them to evaluate employee performance based on internal AI tool usage, calling it "core to every role and every level." The mandate comes as Microsoft faces lagging internal adoption of its Copilot AI services while competition intensifies in the AI coding market. GitHub Copilot, Microsoft's flagship AI coding assistant, is losing ground to rivals like Cursor, which recent Barclays data suggests has surpassed Copilot in key developer segments.

OpenAI partnership tensions spill over into workplace policies

The partnership tensions have reached a critical point: OpenAI is considering acquiring Windsurf, a competitor to Microsoft's GitHub Copilot, but Microsoft's existing deal would grant it access to Windsurf's intellectual property, creating an impasse that neither OpenAI nor Windsurf wants, sources familiar with the talks told Business Insider. Microsoft allows employees to use some external AI tools that meet security requirements, including coding assistant Replit. However, the company wants workers building AI products to better understand their own tools while driving broader internal usage. Some Microsoft teams are considering adding formal AI usage metrics to performance reviews for the next fiscal year, Business Insider learned from people familiar with the plans.
The initiative reflects Microsoft's broader strategy to ensure its workforce embraces AI tools as competition heats up. Liuson emphasized that AI usage "should be part of your holistic reflections on an individual's performance and impact," treating it like other core workplace skills such as collaboration and data-driven thinking. The move signals how AI adoption has become essential to Microsoft's competitive positioning amid evolving partnerships and market pressures.
