Razorpay Joins Hands with MeitY Startup Hub to Boost 100 AI Startups

Entrepreneur | 21-05-2025
Fintech giant Razorpay has announced a strategic partnership with the MeitY Startup Hub (MSH) to nurture and scale 100 early-stage artificial intelligence (AI) startups over the next two years.
This initiative aims to empower startups with access to Razorpay's robust fintech infrastructure, expert mentorship, and vital networking opportunities.
"India's tech ecosystem has always been a cradle for innovation. Today, it is further strengthened by the capabilities of Artificial Intelligence," said Shashank Kumar, Managing Director and Co-founder of Razorpay. "The aim is to help young entrepreneurs unlock the potential of AI to solve real-world problems at scale."
Startups onboarded through MSH will gain complimentary access to Razorpay's suite of products—including its payment gateway, payout solutions, and corporate credit cards. Additionally, Razorpay will conduct workshops and one-on-one mentorship sessions focused on navigating regulatory challenges and building scalable, compliant fintech operations.
"The collaboration aims to identify, nurture, and support high-potential AI startups by equipping them with technical know-how, financial infrastructure, and mentorship from industry veterans," Razorpay said in a statement.
Arif Khan, Chief Innovation Officer at Razorpay, emphasized the mission's grassroots focus. "Most transformative ideas often begin in unlikely places, with a lone founder solving a real problem, fueled by belief more than resources. But turning belief into a business takes more than just passion; it needs the right infrastructure, the wisdom of those who've walked the path, and a community that has your back," he said. "This partnership with MeitY Startup Hub is about doing just that."
Over the past two years, Razorpay has supported over 450 startups through initiatives such as its flagship Rize programme. MSH, which functions under the Ministry of Electronics and Information Technology, collaborates with more than 400 incubators across India to promote entrepreneurship and technological advancement.
"At MeitY Startup Hub, we believe collaboration between government and industry is key to fostering innovation," said Jeet Vijay, CEO of MSH. "Through this partnership with Razorpay, we aim to nurture 100 AI-focused startups, providing them with the right tools and mentorship to thrive."

Related Articles

Elon Musk's AI Praised Hitler. Now He Wants It to Teach Your Kids

Gizmodo | 8 minutes ago

With Elon Musk, controversy and public relations campaigns often chase one another. He seems to like it that way. Just days after his Grok chatbot made headlines for generating antisemitic content and praise for the Nazis, the billionaire announced he wants the same AI to help raise your children.

Elon Musk's latest AI announcement was not about building a more powerful, all-knowing intelligence. Instead, it was about creating a smaller, safer one. 'We're going to make Baby Grok @xAI,' he posted on X (formerly Twitter) on July 20, adding, 'an app dedicated to kid-friendly content.' He did not provide further details.

Dubbed 'Baby Grok,' the new app promises a family-friendly version of Musk's AI assistant, positioned as a learning and entertainment tool for children. But given Grok's troubled history and Musk's own combative approach to content moderation, how many parents would trust this new creation with their kids?

Initial reactions to the announcement on X were overwhelmingly negative. 'Stop,' one user simply wrote. 'Bad idea. Children should be outside playing & daydreaming, not consuming AI slop,' another user reacted. A third user commented, 'Sounds like a horrible idea that can only go disastrously wrong.'

The timing of the Baby Grok announcement appears to be no coincidence. Grok has been embroiled in a series of controversies. In early July, the chatbot sparked outrage for spouting antisemitic rhetoric and praising Adolf Hitler. A few days later, xAI released a new version, SuperGrok, which included a feature called 'Companions.' Users quickly complained that the avatars for these companions were overly sexualized and crossed a line.

On the surface, 'Baby Grok' is a logical product extension. But viewed against the backdrop of the controversies that have defined its adult version, the announcement looks less like a simple business expansion and more like a strategic and necessary pivot. This is Musk's redemption play, his attempt to sanitize a controversial AI by entrusting it with the most sensitive audience of all: children.

The problem for Musk and xAI is that the original Grok, designed to be an edgy, humorous alternative to what he sees as overly 'woke' chatbots, has frequently stumbled. It has been criticized for its unpredictable nature, a tendency to generate biased or factually incorrect information, and an 'anti-establishment' personality that can veer into inappropriate or conspiratorial territory. For many, Grok is seen not as a reliable source of knowledge but as a digital reflection of its creator's chaotic online persona: a powerful tool that lacks consistent guardrails.

'Baby Grok' is the proposed solution. By creating a walled garden of 'kid-friendly content,' Musk is attempting to prove that his AI venture can be tamed and trusted. The move creates a compelling corporate narrative: after building a flawed and unruly AI for adults, the controversial tech mogul is now apparently turning his attention to protecting children, aiming to build a safe, educational tool that can win over skeptical parents. A successful 'Baby Grok' could rehabilitate the entire Grok brand, demonstrating that xAI can act responsibly. It would also provide an entry point into the immensely lucrative and influential market of children's education and technology, a space currently dominated by established players with far more family-friendly reputations.

The stakes of this venture are immense. By targeting children, Musk is voluntarily stepping into the most scrutinized arena of AI development. The conversation immediately shifts to pressing concerns about digital safety, data privacy, and the profound influence AI will have on the next generation's development. Can a company whose ethos is rooted in a maximalist interpretation of free speech truly build the filters and safeguards necessary to protect young minds? Parents will be asking whether the same company that champions unmoderated discourse can be trusted to curate a safe learning environment.

When Google announced last May that it would roll out its AI chatbot Gemini for users under 13, a coalition of consumer advocates and child safety experts, including Fairplay and the Center for Online Safety, asked the company to suspend the decision. They cited the 'AI chatbot's unaddressed, significant risks to young children.' 'AI chatbots and other generative AI products pose increased risks to young children,' the coalition wrote in a letter to Google CEO Sundar Pichai. 'Children have difficulty understanding the difference between an AI chatbot and a human, and AI chatbots can easily trick a child into trusting it.'

There are also broader concerns about privacy. xAI has not specified whether 'Baby Grok' will collect or retain usage data from child users, or what kind of parental controls will be in place. For a generation of parents already uneasy about screen time and algorithmic influence, the idea of letting 'Baby Grok' interact with a child may be a hard sell no matter how sanitized the content.

There is also the question of tone. Musk's personal brand, often combative, cynical, and steeped in internet irony, seems at odds with the kind of earnest, trustworthy image required for educational children's tech. If Grok was born as a kind of Reddit troll in chatbot form, can 'Baby Grok' convincingly play the role of Big Bird?

This effort puts Musk's xAI at the center of one of the tech industry's biggest challenges: making powerful AI technology safe and beneficial for society. 'Baby Grok' is more than just an app; it is a public test case for xAI's commitment to responsibility. A success could redefine the company's image and build a foundation of trust. A failure, however, would be catastrophic, not only confirming the worst fears about Grok but also damaging the public's already fragile trust in the role of AI in our daily lives.

Ultimately, the launch of 'Baby Grok' is a high-risk, high-reward gamble. It is an attempt to solve a PR problem with a product, betting that a safe haven for kids can make the chaotic world of adult AI seem more manageable. The world will be watching to see if this is the unlikely beginning of a more responsible chapter for Musk's AI ambitions, or simply another disaster waiting to happen.

Bloomberg Surveillance TV: July 21st, 2025

Bloomberg | 8 minutes ago

Guests:
- Chris Harvey, Head of Equity Strategy at Wells Fargo
- Angelo Zino, Head of Technology at CFRA
- Charles Myers, Chairman and Founder at Signum Global Advisors
- Claudia Sahm, Chief Economist at New Century Advisors

Chris Harvey talks about market bullishness and whether his S&P target could change should the president fire Fed Chair Jay Powell. Angelo Zino discusses Big Tech's AI investment and talent poaching, as well as the outlook for the broader tech sector. Charles Myers discusses Fed independence and how a less independent Fed could reshape markets. Claudia Sahm weighs in on US labor and inflation, as well as Jay Powell's future as Fed Chair.

How Enterprise Legal Teams Can Track The ROI Of Their AI Tools

Forbes | 9 minutes ago

Mark Doble is the CEO of Alexi, an AI-powered litigation platform.

Enterprise legal teams are among the most active adopters of AI in the legal sector, particularly when it comes to streamlining high-volume, time-intensive tasks. As adoption evolves, many are moving away from general-purpose platforms toward specialized, self-hosted tools that are better suited to legal workflows. This transition has raised the question: How can teams measure the return on investment (ROI) of their AI tools?

With over half of legal teams (53%) citing implementation costs as their top barrier to AI adoption, ROI isn't just a performance metric; it's a strategic necessity. While quantifying ROI is complicated, failing to track it will cost more than any AI investment ever could. The real differentiator is how effectively teams measure impact in cost savings, legal output, talent strategies and strategic value.

Establish a pre-AI baseline.

Tracking ROI begins with identifying a reliable benchmark. Legal departments should assess operational metrics over a one- to three-month period before AI implementation. Useful data points include:

• Time spent on tasks such as document review or legal research
• Frequency of rework due to human error
• Operational costs
• Client satisfaction and delivery timelines

These figures provide firms with the necessary context for understanding post-implementation gains or inefficiencies.

What should you measure once AI is in place?

The most tangible gains often come from reassigning talent away from repetitive tasks and toward higher-value work. Improvements in productivity, case throughput or reduced cycle times can be strong indicators of positive ROI.

What defines 'better' will vary by organization. For some, it's the ability to take on more matters. For others, it's a higher win rate, better client satisfaction scores or more consistent legal arguments across teams. AI capabilities like pattern recognition or citation analysis will help reduce variability and improve decision-making.

Every leader knows that a single data breach can obliterate years of work that went into building trust and rapport with clients, your team and other stakeholders. While direct ROI from reduced risk is harder to quantify, tracking near misses, audit outcomes or compliance lapses pre- and post-adoption can help demonstrate impact. Tools deployed in private environments with limited external dependencies are often better positioned to mitigate exposure.

Leverage data to drive continuous improvement.

It's not enough to hope your team is using AI well. You need data to confirm it. Modern legal AI platforms increasingly come equipped with analytics dashboards that can provide insight into:

• Real-time spend and matter progress
• Usage rates for specific features and automations
• Comparisons against internal benchmarks or industry norms

Analyzing this data helps legal teams adjust workflows, identify underused features and inform future investments.

Take a long-term view.

Implementing an AI tool isn't just a one-and-done strategy. These systems are designed to evolve with your organization's needs over time. That means ROI tracking should extend beyond short-term metrics. Periodic surveys of user satisfaction, perceived workload reduction and adaptability to evolving workflows can help leaders assess whether their tools are still aligned with business goals. Key questions to ask include:

• Can the system adapt as your workflows evolve?
• Can it remain embedded in the tools your team already uses?
• Does it support your long-term security and compliance priorities?

Surveys measuring time saved, lawyer satisfaction and perceived workload post-adoption can help translate these broader benefits into tangible ROI. And like any strategic initiative, tracking impact shouldn't focus on just the tool itself, but how your firm uses and grows with it as well.

Don't just adopt AI. Justify it.

Legal departments are under pressure to show that AI investments deliver more than just efficiency gains. If your team is piloting or scaling an AI platform, success starts with clarity on metrics, stakeholder alignment and iteration cycles. The best-performing firms in this AI era won't just adopt tools; they'll learn how to measure and evolve with them. AI isn't just about doing things faster. It's about working smarter, with more visibility and precision. Measure that well, and you won't just justify the investment; you'll make it indispensable.
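The baseline-versus-post comparison the article describes reduces to simple arithmetic once the metrics are captured. Below is a minimal sketch of that calculation in Python; every figure, rate, and metric name is a hypothetical assumption chosen for illustration, not data from the article or any particular platform.

```python
# Minimal ROI sketch for an AI tool rollout. All figures, rates, and metric
# names below are hypothetical placeholders, not data from the article.

def roi(baseline_cost: float, post_cost: float, investment: float) -> float:
    """ROI as a fraction: net savings relative to what was spent on the tool."""
    savings = baseline_cost - post_cost
    return (savings - investment) / investment

# Hours logged during a hypothetical three-month baseline period and an
# equivalent post-adoption period, per the article's suggested data points.
baseline_hours = {"document_review": 1200, "legal_research": 800}
post_hours = {"document_review": 700, "legal_research": 500}

BLENDED_RATE = 150.0  # assumed cost per lawyer-hour, in dollars
TOOL_COST = 60_000.0  # assumed quarterly spend on the AI platform

baseline_cost = sum(baseline_hours.values()) * BLENDED_RATE  # 2,000 h -> $300,000
post_cost = sum(post_hours.values()) * BLENDED_RATE          # 1,200 h -> $180,000

# Savings of $120,000 against a $60,000 spend gives 100% ROI for the quarter.
print(f"Quarterly ROI: {roi(baseline_cost, post_cost, TOOL_COST):.0%}")
```

The softer gains the article mentions, such as reduced rework or higher satisfaction scores, can be folded into the same pattern once a team agrees on how to monetize them.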
