
ElevenLabs Eyes India as Strategic Growth Hub in the AI Voice Race
Opinions expressed by Entrepreneur contributors are their own.
You're reading Entrepreneur India, an international franchise of Entrepreneur Media.
The global market for AI voice generation is projected to surge from USD 3.5 billion in 2023 to USD 21.7 billion by 2030. Riding that expansion, AI voice generation unicorn ElevenLabs is doubling down on India, recognising the country's unmatched scale in multilingual content consumption, tech talent, and AI adoption. The US-based startup, valued at USD 3.3 billion following its latest Series C funding round, is positioning India at the centre of its international strategy.
The company's public breakthrough in India came when it dubbed Prime Minister Narendra Modi's three-hour conversation with Lex Fridman from Hindi to English.
Why India?
"In many ways, India was always waiting for a solution like this," says Siddharth Srinivasan, GTM, India, ElevenLabs. "We are all natively bilingual or multilingual, and the internet penetration, content consumption, and developer community here are unmatched."
According to Srinivasan, India offers an ideal structural fit across five major user cohorts: consumers, creators, developers, startups, and enterprise AI users. He notes that India is now home to 2.5 to 3 million monetised content creators, a figure that continues to surprise even former YouTube executives like himself.
ElevenLabs operates as a SaaS platform with a two-pronged go-to-market strategy. The first is a self-serve model, allowing users to begin with a freemium tier and scale up to subscriptions starting at USD 5 (INR 400) per month. The second targets enterprises that require bespoke solutions, high-volume processing, and dedicated support.
This hybrid model has enabled the company to work across various industries. "From Pocket FM and Kuku FM in audio storytelling to social media influencers like Varun Maya, our voice stack is helping them scale content faster and in multiple languages," says Srinivasan.
The startup is already working with leading Indian platforms such as Meesho, Apna, 99acres, and NoBroker, particularly in conversational AI and customer engagement workflows. "Some partners are using our tech to 3–4x their customer interaction scale, something they couldn't imagine doing manually," Srinivasan reveals.
In education, ElevenLabs is collaborating with startups like Supernova to create personalised, multilingual learning experiences through AI-powered conversational agents. "The promise of education technology has always been one-to-one learning. We are now able to fulfil that using voice AI," he adds.
On the cybersecurity challenges
Given the increasing misuse of voice cloning in phishing and disinformation campaigns, ElevenLabs claims to take a strict, layered approach to responsibility. "Moderation, accountability, and provenance are built into our system," says Srinivasan. He elaborates that cloning protected voices such as public figures is restricted through a "no-go voice" list, and cloning can only occur with direct consent via their "voice capture" system.
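As a rough illustration of the layered gating Srinivasan describes, the policy amounts to two checks applied before any clone is created: the target voice must not be on the protected list, and consent must have been captured. The names below (`NO_GO_VOICES`, `request_clone`) are hypothetical, for illustration only, and are not ElevenLabs' actual API.

```python
# Hypothetical sketch of the layered safeguards described above:
# cloning is refused for voices on a "no-go" list (e.g. public figures),
# and otherwise requires recorded consent via a voice-capture step.

NO_GO_VOICES = {"public-figure-001", "public-figure-002"}  # illustrative IDs

def request_clone(voice_id: str, has_captured_consent: bool) -> str:
    """Return the outcome of a clone request under the two-check policy."""
    if voice_id in NO_GO_VOICES:
        return "rejected: protected voice"
    if not has_captured_consent:
        return "rejected: consent not captured"
    return "approved"
```

The ordering matters: the protected-voice check runs first, so a no-go voice is rejected even if consent is claimed.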
The company has also developed a speech classifier that can identify whether a sample was generated on ElevenLabs with over 99 per cent precision. "We're working with industry standards on watermarking and detection, and are open to law enforcement partnerships," he says, pointing out that the company successfully avoided misuse of its technology during recent US and Indian elections.
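For readers unfamiliar with the metric, "precision" measures what fraction of the samples a detector flags as AI-generated really are. A toy calculation with made-up counts (not ElevenLabs' evaluation data):

```python
# Precision = true positives / (true positives + false positives).
# High precision means few false alarms when the classifier flags a sample.

def precision(true_positives: int, false_positives: int) -> float:
    """Fraction of flagged samples that were genuinely AI-generated."""
    return true_positives / (true_positives + false_positives)

# e.g. 995 correct flags and 5 false alarms -> 0.995, i.e. 99.5% precision
print(precision(995, 5))
```

Note that precision says nothing about samples the classifier misses; that is measured separately by recall.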
No government tie-ups yet, but social impact is on the radar
While ElevenLabs does not currently have formal partnerships with the Indian government, it is participating in NGO-led initiatives that use AI voice to support people with speech impairments. "We distribute the technology for free to those with vocal challenges, enabling them to express themselves," Srinivasan says.
He acknowledges the massive potential for collaboration in education and social policy, particularly under the IndiaAI mission. "We hope to have a meaningful role in that ecosystem," he adds.
Also, on the competition side, Srinivasan candidly explains, "If there's no competition, the space isn't worth being in." He says ElevenLabs differentiates itself through state-of-the-art research, a deeply user-centric product, and relentless execution. "Speed is the only real moat in AI," he states.
India was the first market the company expanded into outside the West, and it is likely to remain a priority. ElevenLabs already supports 11–12 Indian languages and aims to push that further with emotion-rich, dialect-sensitive output in its latest V3 models.
When asked about the future, Srinivasan is clear-eyed in his ambition: "We think we are very well placed to be the voice of the Indic internet, where content has no barrier and creativity knows no limit." He also hints at upcoming partnerships with Indian startups and research entities.