
Inside IvyCap's Tech Playbook
Opinions expressed by Entrepreneur contributors are their own.
You're reading Entrepreneur India, an international franchise of Entrepreneur Media.
IvyCap is a homegrown venture capital firm. At its helm is founder and managing partner Vikram Gupta, whose tech-focused investment thesis is making careful bets in artificial intelligence (AI), healthtech, and other deep tech sectors. The firm's origin is unique: a venture capital fund backed by Indian institutional money and a strong IIT alumni trust.
Through this structure, IvyCap is scaling up capital with the aim of investing in very early-stage startups.
"These new-age technologies are at very early stages, TRL (technology readiness level) 1, 2, and 3, and we help them build very unique kinds of technology stacks. We've been quite fortunate to have made a lot of room and a lot of progress there," says Gupta.
Gupta says the idea is to help provide grants to such disruptive technologies. The firm is reaching deep into all the IITs, IIMs, ISB, the Indian Institute of Science, and other institutions, leveraging their technologies and helping build centres of excellence. "That's how we can spot these technologies early on and even fund them for commercialization or even scale-up."
The VC firm sees huge opportunities across deep tech and other emerging tech areas. According to Gupta, there are quite a few examples of disruption: AI, for instance, is an overarching theme with multiple areas, including vertical AI, horizontal AI, and infrastructure AI.
"Vertical AI is catering to sectors such as financial services, health care, or insurance. That is a unique opportunity building up. Horizontal AI, on the other hand, is building a lot of agentic AI tools and other AI disruptive technologies, which are catering to various models. And the infrastructure AI is working towards creating a lot of hardware support systems, like semiconductors, and setting up data centers and other things, which support the AI processing. So we are investing across all three areas," says Gupta.
Gupta adds that the firm is also looking at opportunities across areas such as space tech, defence tech, IoT devices, and other similar hardware technologies. "In addition, there are opportunities in blockchain and other areas as well. So we are going deeper in each of these areas through our collaborations with the IITs and identifying specific talent across each of these verticals and horizontals."
Gupta believes that India is sitting in a very unique place to leverage these areas, and a lot of talent is now getting involved. "And with our funding and the large pool of grant capital that we are building, we are likely to build this further."
However, it is not easy to catch these technology trends early on, so the firm looks closely at the people involved in solving specific problems. "We look for the passion they have in driving these technologies."
"We also look at the specific business models being targeted and the understanding of the commercial side in terms of the problem getting solved. And I think some of these ideas are very disruptive. So when they're very disruptive, the risk is also very high."
Sometimes, these bets can become capital-intensive over time, so the firm has to weigh various factors.
"But if you were to break it down into different buckets: one bucket is the entrepreneur and the team, which means looking at their backgrounds. The second bucket is the business model, geographical spread, and so on; we evaluate all those kinds of things from a business model perspective. And the third piece is the scale potential, in terms of how large this can be," says Gupta.
Factsheet:
Corpus size: INR 5,000 crore
Portfolio: 55 Companies
Dragon Exits: Purplle at INR 330 Cr

Related Articles


Fast Company
18 minutes ago
Why your brain matters more than ever in the AI age
It can be enthralling to watch artificial intelligence models progress toward a mastery of deep learning. But are we equally invested in our own abilities to think and learn? The human capacity to think deeply, find meaning, and apply wisdom is what makes us unique. Yet it is increasingly tempting and easy to rely on the fast, accessible answers that AI provides. In a recent McKinsey study of organizations that use generative AI, only 27% said that employees review all content created by gen AI before it is used. One-third of respondents said that only 20% or less of gen-AI-produced content is checked before use.
The antidote in this moment is critical thinking. Critical thinking is sometimes called 'careful thinking,' as it involves questioning, interpretation, and discernment. It is not always our default mode, and it's already under siege from frequent AI usage. However, critical thinking skills can be taught. Moreover, according to our latest research, leaders with strong critical thinking skills have better outcomes, such as confidence in their ability to lead and lower burnout.
Thinking Slow or Not at All
Whether it's a matter of being lazy or economical, humans don't think a lot if we don't have to. This isn't necessarily a bad thing. Researchers estimate that our conscious brains process information at a rate of 10 bits per second. (AI models process data at trillions of bits per second.) So we conserve our limited mental horsepower for complex tasks rather than 'wasting' it on simple or repetitive tasks. This is why we go into autopilot mode when we drive familiar routes or rely on mental shortcuts to make decisions. (For example, we are prone to judging a person's trustworthiness based on appearance instead of interactions.) Our slow brains have a new, fast friend called AI. That's a good thing, right? It can be.
AI can rapidly process vast amounts of information, recognize patterns that lie beyond human reach, and provoke us to consider new angles. AI-based tools will expand our understanding of business performance, team dynamics, market trends, and customer sentiment. But our new friend can also exacerbate our tendency for cognitive laziness. Remember those mental shortcuts we take? In one shortcut, we overtrust answers from automated systems and don't pay attention to contradictory information, even if it's correct. As AI tools become even smarter and slicker, and answers are delivered in highly confident tones, this automation bias can grow. The downside to all of this is the risk of losing one's own capacity for thinking, learning, and reasoning.
Guillaume Delacour, global head of people development at ABB, a technology leader in electrification and automation, spoke to us about the importance of critical thinking for leaders in the age of AI. 'One of the big benefits of AI is that it always has an answer, but this is also a major challenge,' he noted. 'It can be too easy to accept the outcomes it generates. Good leaders have always needed critical thinking, but in our AI-enabled workplace, where every question has an instant answer, this skill is even more important.'
Are You a Strong Thinker?
Critical thinking is the ability to evaluate situations objectively and make informed, well-reasoned decisions. It requires us to consider biases, question assumptions, and incorporate multiple perspectives. With critical thinking, it's like your brain is doing a workout rather than just lounging on the couch. And, like a physical workout, critical thinking requires discipline, self-awareness, and effort. But the payoff is pretty significant. We recently assessed 227 leaders on their level of critical thinking and divided the group into high and low critical thinkers.
We assessed how well each group is likely to operate in the new world of AI, as well as their overall experience as a leader. The differences are striking.
Leaders Who Don't Think Will Struggle
In a world in which answers can come fast and easily, leaders who score low on critical thinking are at greater risk of letting machines do the thinking for them and becoming increasingly less sharp.
· Low critical thinkers are 18% more likely to have confirmation bias than high critical thinkers. Confirmation bias is the tendency to look for or favor information that confirms our existing beliefs.
· Low critical thinkers are 32% more likely to over-rely on gen AI for answers.
· Low critical thinkers are 36% more likely to demonstrate cognitive failures. Cognitive failures are everyday lapses in memory or functioning during situations we normally are on top of, such as forgetting where you put the car keys.
Leaders Who Think Will Thrive
Strong critical thinkers have a protective shield against the threats of AI. Critical thinking balances the pull toward cognitive laziness and guards against our natural tendencies to accept and rely on what AI tells us. Moreover, these thinkers have a better experience as a leader.
· High critical thinkers rate themselves 14% higher than low critical thinkers on their ability to perform well in their roles.
· High critical thinkers rate themselves 13% higher than low critical thinkers on their ability to lead others effectively.
· High critical thinkers rate themselves 10% higher than low critical thinkers on their ability to lead confidently into the future.
Additionally, high critical thinkers report 21% less burnout in their roles and 16% higher job satisfaction. In important ways, thinking can be a secret weapon for leaders, enabling them to be better at and happier in their jobs.
Strengthening Your Thinking Muscle
The encouraging news for leaders is that critical thinking is not a 'you have it, or you don't' proposition.
Each of us can be a critical thinker, but we need to intentionally rewire our relationship to thinking in order to cultivate this vital leadership skill. Here are a few things to try.
Think about your thinking. In the course of a day or week, try taking a mental step back to observe how you think. You could ask yourself questions such as:
· What is a belief or assumption that I questioned?
· Did I change my mind about something important?
· Did I avoid any information because it challenged me?
· Did I feel uncomfortable in any ambiguous situations?
The underlying skill you are practicing here is the ability to observe how you think and to discern what may be influencing your thoughts. Is there a past experience or possible bias that is playing a role? How much does stress or the need for speed factor in?
Practice 'why' questions. When looking at a situation, ask yourself why it happened, why it matters, and/or why a particular conclusion was reached. This habit encourages 'second looks' and slows us down to uncover underlying assumptions, potential biases, and hidden logic. This approach not only deepens our understanding but also stretches our ability to evaluate information from multiple perspectives.
Make AI your thinking partner. If we are not careful, our predisposition to cognitive laziness will drive us to pick the fast answers that come from AI models versus the deeper mental workout that comes from wrestling with complex ideas or considering underlying assumptions. But that doesn't mean AI can't play a role. When used well, AI tools can be very effective critical thinking coaches, nudging us to consider new angles or refine our arguments. Always make sure you challenge AI by asking questions such as: How did you come up with that result? Why should I believe that what you are suggesting is correct? What questions should I ask to improve my critical thinking?
Bigger Comprehension
Thinking has always set humans apart: something to be taught, mastered, and celebrated. In 1914, IBM founder Thomas J. Watson declared 'THINK' as the mantra for the struggling machine organization, saying, "'I don't think' has cost the world millions of dollars." We have now arrived at an incredible point when machines can think and learn in ways far surpassing human abilities. There are benefits to this, ways in which AI can make us all smarter. The key is to stay alert and grounded in what is uniquely human: the ability to examine an answer with clarity, to grasp what's around and underneath it, and to connect it to a bigger comprehension of the world around us.


Fast Company
22 minutes ago
These two game-changing breakthroughs advance us toward artificial general intelligence
The biggest technology game changers don't always grab the biggest headlines. Two emerging AI developments may not go viral on TikTok or YouTube, but they represent an inflection point that could radically accelerate the development of artificial general intelligence (AGI): AI that can function and learn like us.
Coming to our senses: WildFusion
As humans, we rely on all sorts of stimuli to navigate the world, including our senses: sight, sound, touch, taste, smell. Until now, AI devices have been solely reliant on a single sense, visual impressions. Brand-new research from Duke University goes beyond reliance only on visual perception. It's called WildFusion, combining vision with touch and vibration. The four-legged robot used by the research team includes microphones and tactile sensors in addition to the standard cameras commonly found in state-of-the-art robots. The WildFusion robot can use sound to assess the quality of a surface (dry leaves, wet sand) as well as pressure and resistance to calibrate its balance and stability. All of this data is gathered and combined, or fused, into a single data representation that improves over time with experience. The research team plans to enhance the robot's capabilities by enabling it to gauge things like heat and humidity. As the types of data used to interact with the environment become richer and more integrated, AI moves inexorably closer to true AGI.
Learning to learn
The second underreported AI technology game changer comes from researchers at the universities of Surrey and Hamburg. While still in the early stages of development, this breakthrough allows robots that interact socially with humans (social robots) to train themselves with minimal human intervention. It achieves this by replicating what humans would visually focus on in complex social situations.
For example, we learn over time as humans to look at a person's face when talking to them, or to look at what they are pointing to rather than at their feet or off into space. But robots won't do that without being specifically trained. Until now, the training to refine behavior in robots was primarily reliant on constant human monitoring and supervision. This innovative new approach uses robotic simulations to track, monitor, and, importantly, improve the quality of robot interactions with minimal human involvement. Robots learn social skills without constant human oversight. This marks an important step forward in the overall advancement of social robotics and could prove to be a huge AGI accelerator. Self-teaching AI could lead to advancements at an exponential rate, a prospect some of us view as thrilling, others as chilling.
AI signal over noise
Amazing as they may be to watch, dancing humanoid robots and mechanical dogs can be characterized as narrow AI: AI designed only for a specific task or purpose. The feats of these purpose-built tools are impressive. But these two new developments advance how AI experiences the world and how it learns from those experiences. They will dramatically change how technology exists (and coexists with us) in the world. Taken together, these breakthroughs and the work of other researchers and entrepreneurs along similar paths are resetting the trajectory and the timetable for achieving AGI. This could mark the tipping point that turns the slow march toward AGI into an all-out run.


WIRED
28 minutes ago
The AI Backlash Keeps Growing Stronger
As generative artificial intelligence tools continue to proliferate, pushback against the technology and its negative impacts grows stronger.
Before Duolingo wiped its videos from TikTok and Instagram in mid-May, social media engagement was one of the language-learning app's most recognizable qualities. Its green owl mascot had gone viral multiple times and was well known to younger users, a success story other marketers envied. But when news got out that Duolingo was making the switch to become an 'AI-first' company, planning to replace contractors who work on tasks generative AI could automate, public perception of the brand soured. Young people started posting on social media about how outraged they were at Duolingo as they performatively deleted the app, even if it meant losing the precious streak awards they had earned through continued daily usage. The comments on Duolingo's TikTok posts in the days after the announcement were filled with rage, primarily focused on a single aspect: workers being replaced with automation.
The negative response online is indicative of a larger trend: Right now, though a growing number of Americans use ChatGPT, many people are sick of AI's encroachment into their lives and are ready to fight back. When reached for comment, Duolingo spokesperson Sam Dalsimer stressed that 'AI isn't replacing our staff' and said all AI-generated content on the platform would be created 'under the direction and guidance of our learning experts.' The company's plan is still to reduce its use of non-staff contractors for tasks that can be automated using generative AI.
Duolingo's embrace of workplace automation is part of a broad shift within the tech industry. Leaders at Klarna, a buy now, pay later service, and Salesforce, a software company, have also made sweeping statements about AI reducing the need for new hires in roles like customer service and engineering.
These decisions were made at the same time as developers sold 'agents,' which are designed to automate software tasks, as a way to reduce the number of workers needed to complete certain tasks. Still, the potential threat of bosses attempting to replace human workers with AI agents is just one of many compounding reasons people are critical of generative AI. Add to that the error-ridden outputs, the environmental damage, the potential mental health impacts for users, and the concerns about copyright violations when AI tools are trained on existing works.
Many people were initially in awe of ChatGPT and other generative AI tools when they first arrived in late 2022. You could make a cartoon of a duck riding a motorcycle! But soon artists started speaking out, noting that their visual and textual works were being scraped to train these systems. The pushback from the creative community ramped up during the 2023 Hollywood writers' strike, and continued to accelerate through the current wave of copyright lawsuits brought by publishers, creatives, and Hollywood studios. Right now, the general vibe aligns even more with the side of impacted workers.
'I think there is a new sort of ambient animosity towards the AI systems,' says Brian Merchant, former WIRED contributor and author of Blood in the Machine, a book about the Luddites rebelling against worker-replacing technology. 'AI companies have speedrun the Silicon Valley trajectory.'
Before ChatGPT's release, around 38 percent of US adults were more concerned than excited about increased AI usage in daily life, according to the Pew Research Center. The number shot up to 52 percent by late 2023, as the public reacted to the speedy spread of generative AI. The level of concern has hovered around that same threshold ever since. Ethical AI researchers have long warned about the potential negative impacts of this technology.
The amplification of harmful stereotypes, increased environmental pollution, and potential displacement of workers are all widely researched and reported. These concerns were previously often confined to academic discourse and online leftists paying attention to labor issues. As AI outputs continued to proliferate, so did the cutting jokes. Alex Hanna, coauthor of The AI Con and director of research at the Distributed AI Research Institute, mentions how people have been 'trolling' in the comment sections of YouTube Shorts and Instagram Reels whenever they see AI-generated content in their feeds. 'I've seen this on the web for a while,' she says.
This generalized animosity towards AI has not abated over time. Rather, it has metastasized. LinkedIn users have complained about being constantly prompted with AI-generated questions. Spotify listeners have been frustrated to hear AI-generated podcasts recapping their top-listened songs. Reddit posters have been upset to see AI-generated images on their microwavable noodles at the grocery store. Tensions are so high that even the suspicion of AI usage is now enough to draw criticism. I wouldn't be surprised if social media users screenshotted the em dashes in this piece—a supposed giveaway of AI-generated text outputs—and cast suspicions about whether I used a chatbot to spin up sections of the article.
A few days after I first contacted Duolingo for comment, the company hid all of its social media videos on TikTok and Instagram. But soon the green owl was back online with a satirical post about conspiracy theories. 'I've had it with the CEOs and those in power. It's time we show them who's in charge,' said a person wearing a three-eyed Duolingo mask. The video uploaded right afterwards was a direct message from the company's CEO attempting to explain how humans would still be working at Duolingo, but AI could help them produce more language-learning courses.
While the videos got millions of views on TikTok, the top comments continued to criticize Duolingo for AI-enabled automation: 'Keep in mind they are still using AI for their lessons, this doesn't change anything.'
This frustration over AI's steady creep has breached the container of social media and started manifesting more in the real world. Parents I talk to are concerned about AI use impacting their children's mental health. Couples are worried about chatbot addictions driving a wedge in their relationships. Rural communities are incensed that the newly built data centers required to power these AI tools are kept humming by generators that burn fossil fuels, polluting their air, water, and soil. As a whole, the benefits of AI seem esoteric and underwhelming while the harms feel transformative and immediate.
Unlike the dawn of the internet, when democratized access to information empowered everyday people in unique, surprising ways, the generative AI era has been defined by half-baked software releases and threats of AI replacing human workers, especially recent college graduates looking for entry-level work. 'Our innovation ecosystem in the 20th century was about making opportunities for human flourishing more accessible,' says Shannon Vallor, a technology philosopher at the Edinburgh Futures Institute and author of The AI Mirror, a book about reclaiming human agency from algorithms. 'Now, we have an era of innovation where the greatest opportunities the technology creates are for those already enjoying a disproportionate share of strengths and resources.'
Not only are the rich getting richer during the AI era, but many of the technology's harms are falling on people of color and other marginalized communities. 'Data centers are being located in these really poor areas that tend to be more heavily Black and brown,' Hanna says.
She points out how locals have not just been fighting back online but have also been organizing in person to protect their communities from environmental pollution. We saw this recently in Memphis, Tennessee, where Elon Musk's artificial intelligence company xAI is building a large data center with over 30 methane-gas-powered generators that are spewing harmful exhaust.
The impacts of generative AI on the workforce are another core issue that critics are organizing around. 'Workers are more intuitive than a lot of the pundit class gives them credit for,' says Merchant. 'They know this has been a naked attempt to get rid of people.' The next major shift in public opinion will likely follow previous patterns, occurring when broad swaths of workers feel further threatened and organize in response. And this time, the in-person protests may be just as big as the online backlash.