Robinhood Is Using AI to Generate Half of All New Code

Entrepreneur · 3 days ago
Robinhood co-founder and CEO, Vlad Tenev, says almost all engineers at the company have adopted AI coding tools.
Engineers at the stock trading and investing app Robinhood are using AI to generate new code instead of writing it themselves.
Robinhood CEO Vlad Tenev said on the 20VC podcast earlier this week that "close to 100%" of software engineers at the company are using AI to write blocks of code, tapping into tools like Cursor and Windsurf, which advertise advanced coding, debugging, and editing capabilities. According to Tenev, over 50% of new code at Robinhood is AI-generated, the same percentage as Salesforce.
Tenev said that it was difficult to differentiate between AI-written code and human-created code, estimating that only a "minority" of new code at Robinhood was now written by humans.
"It's hard to even determine what the human-generated code is," Tenev said on the podcast. "If I had to guess, it's in the minority."
Meanwhile, Google CEO Sundar Pichai and Microsoft CEO Satya Nadella have individually stated that AI writes 30% of the code at their respective companies, putting Robinhood's AI coding adoption ahead of those big tech companies.
Robinhood CEO Vlad Tenev.
Tenev also said on the podcast that AI has had a "huge" impact on Robinhood internally, affecting teams like customer support. For example, Robinhood built its own version of ChatGPT for customer service.
"The impact that it's had on internal teams, ranging from software engineering to customer support, the really big internal teams, has been huge," Tenev said on the podcast.
Robinhood has more than quadrupled its market capitalization in the past eight months, from $21 billion in November to about $90 billion at the time of writing. In 2024, the company achieved total net revenue of $2.95 billion, up 58% year-over-year.

Related Articles

Analyst Says Amazon.com (AMZN) Cloud Business Needs to Show 'Acceleration' for Stock Outperformance

Yahoo · 25 minutes ago

Amazon.com, Inc. (NASDAQ:AMZN) is one of the . Mark Mahaney, head of internet research at Evercore ISI, recently said Amazon needs to show further AWS growth for stock outperformance.

'The retail business is important for Amazon.com, Inc. (NASDAQ:AMZN). It's a necessary condition. I think for the stock to really outperform though, it will be the cloud business. You need to see acceleration in that in the back half of the year. I think we're going to see that. If we're wrong on that, the stock's not going to outperform from here. The retail business also needs to show this continued expansion in margins. And you know the—I know we've sort of waxed off and on and now we're off about tariff risk, but it's still there and you know, Amazon.com, Inc. (NASDAQ:AMZN) need—and Amazon's kind of the canary in the coal mine. Shoot, they may be the whole coal mine. I mean they're going to give us a read into, and we're going to be tracking pricing, for prices on products on Amazon.com, Inc. (NASDAQ:AMZN) and, you know, not these four days but as we go through the back half of the year and, you know, there is risk here.'

AWS revenue jumped 16.9% year over year in the last reported quarter, while its operating income rose 22.6%. AWS has now surpassed a $100 billion annual run rate, playing a central role in helping businesses modernize infrastructure, reduce costs, and accelerate innovation.

The market often overlooks Amazon's ads business, which is generating more than $10 billion in quarterly revenue despite being built from scratch. In the first quarter, ad revenue rose 19% from a year earlier to $13.9 billion, continuing to support overall profitability. According to some Wall Street estimates, Amazon is projected to earn $6.20 per share in 2025 and $8.95 in 2027, reflecting 44.4% earnings growth over two years.

Lakehouse Global Growth Fund stated the following regarding Amazon.com, Inc. (NASDAQ:AMZN) in its May 2025 investor letter: 'Amazon.com, Inc. (NASDAQ:AMZN) reported a solid quarterly result with net sales up 9% year-on-year (10% in constant currency terms) to $155.7 billion and operating profit up 20% to $18.4 billion. The company's core e-commerce business remained resilient in the face of potential tariffs, with management noting they hadn't seen any material change in consumer buying behaviour as at the end of April. Amazon Web Services (AWS) grew 17% to $29.3 billion, which was a slight deceleration from the 19% delivered last quarter. Whilst this seems disappointing at first blush, management reiterated that demand is very strong and they are still capacity constrained. Artificial intelligence (AI) continues to be a key growth driver, with AI workloads growing in excess of 100% year-on-year on AWS. Overall, it was a positive result, and we remain confident that the company is set to deliver many years of solid revenue growth and margin expansion.'

While we acknowledge the potential of AMZN as an investment, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns and have limited downside risk. If you are looking for an extremely cheap AI stock that is also a major beneficiary of Trump tariffs and onshoring, see our free report on the . READ NEXT: 30 Stocks That Should Double in 3 Years and 11 Hidden AI Stocks to Buy Right Now. Disclosure: None. This article is originally published at Insider Monkey.
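The two-year earnings growth figure cited above follows directly from the two quoted estimates. As a quick sanity check, the minimal sketch below recomputes it; the $6.20 (2025) and $8.95 (2027) per-share figures are simply the Wall Street estimates the article cites, not fresh data.

```python
# Sanity check of the implied two-year EPS growth from the estimates quoted above.
# Assumes the article's figures: $6.20 EPS in 2025 and $8.95 EPS in 2027.
eps_2025 = 6.20
eps_2027 = 8.95

growth_pct = (eps_2027 / eps_2025 - 1) * 100
print(f"Implied two-year EPS growth: {growth_pct:.1f}%")  # prints ~44.4%
```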

Why Machines Aren't Intelligent

Forbes · 27 minutes ago

Abstract painting of man versus machine, cubism-style artwork.

OpenAI has announced that its latest experimental reasoning LLM, referred to internally as the 'IMO gold LLM', has achieved gold-medal level performance at the 2025 International Mathematical Olympiad (IMO). Unlike specialized systems like DeepMind's AlphaGeometry, this is a reasoning LLM, built with reinforcement learning and scaled inference, not a math-only engine. As OpenAI researcher Noam Brown put it, the model showed 'a new level of sustained creative thinking' required for multi-hour problem-solving. CEO Sam Altman said this achievement marks 'a dream… a key step toward general intelligence', and that such a model won't be generally available for months.

Undoubtedly, machines are becoming exceptionally proficient at narrowly defined, high-performance cognitive tasks. This includes mathematical reasoning, formal proof construction, symbolic manipulation, code generation, and formal logic. Their capabilities also extend significantly to computer vision, complex data analysis, language processing, and strategic problem-solving, because of significant advancements in deep learning architectures (such as transformers and convolutional neural networks), the availability of vast datasets for training, substantial increases in computational power, and sophisticated algorithmic optimization techniques that enable these systems to identify intricate patterns and correlations within data at an unprecedented scale and speed. These systems can accomplish sustained multi-step reasoning, generate fluent human-like responses, and perform under expert-level constraints similar to humans.

With all this, and a bit of enthusiasm, we might be tempted to think that this means machines are becoming incredibly intelligent, incredibly quickly. Yet this would be a mistake, because being good at mathematics, formal proof construction, symbolic manipulation, code generation, formal logic, computer vision, complex data analysis, language processing, and strategic problem-solving is neither a necessary nor a sufficient condition for 'intelligence', let alone for incredible intelligence. The fundamental distinction lies in several key characteristics that machines demonstrably lack.

Machines cannot seamlessly transfer knowledge or adapt their capabilities to entirely novel, unforeseen problems or contexts without significant re-engineering or retraining. They are inherently specialized. They are proficient at tasks within their pre-defined scope, and their impressive performance is confined to the specific domains and types of data on which they have been extensively trained. This contrasts sharply with the human capacity for flexible learning and adaptation across a vast and unpredictable array of situations.

Machines do not possess the capacity to genuinely experience or comprehend emotions, nor can they truly interpret the nuanced mental states, intentions, or feelings of others (often referred to as "theory of mind"). Their "empathetic" or "socially aware" responses are sophisticated statistical patterns learned from vast datasets of human interaction, not a reflection of genuine subjective experience, emotional resonance, or an understanding of human affect.

Machines lack self-awareness and the ability for introspection. They do not reflect on their own internal processes, motivations, or the nature of their "knowledge." Their operations are algorithmic and data-driven; they do not possess a subjective "self" that can ponder its own existence, learn from its own mistakes through conscious reflection, or develop a personal narrative.

Machines do not exhibit genuine intentionality, innate curiosity, or the capacity for autonomous goal-setting driven by internal desires, values, or motivations. They operate purely based on programmed objectives and the data inputs they receive. Their "goals" are externally imposed by their human creators, rather than emerging from an internal drive or will.

Machines lack the direct, lived, and felt experience that comes from having a physical body interacting with and perceiving the environment. This embodied experience is crucial for developing common sense, intuitive physics, and a deep, non-abstracted understanding of the world. While machines can interact with and navigate the physical world through sensors and actuators, their "understanding" of reality is mediated by symbolic representations and data.

Machines do not demonstrate genuine conceptual leaps, the ability to invent entirely new paradigms, or to break fundamental rules in a truly meaningful and original way that transcends their training data. Generative models can only produce novel combinations of existing data.

Machines often struggle with true cause-and-effect reasoning. Even though they excel at identifying correlations and patterns, correlation is not causation. They can predict "what" is likely to happen based on past data, but their understanding of "why" is limited to statistical associations rather than deep mechanistic insight.

Machines cannot learn complex concepts from just a few examples. While one-shot and few-shot learning have made progress in enabling machines to recognize new patterns or categories from limited data, machines cannot learn genuinely complex, abstract concepts from just a few examples, unlike humans. They still typically require vast datasets for effective and nuanced training.

And perhaps the most profound distinction: machines do not possess subjective experience, feelings, or awareness. They are not conscious entities.

Only when a machine is capable of all (or at least most of) these characteristics, even at a relatively low level, could we then reasonably claim that machines are becoming 'intelligent', without exaggeration, misuse of the term, or mere fantasy. Therefore, while machines are incredibly powerful for specific cognitive functions, their capabilities are fundamentally different from the multifaceted, adaptable, self-aware, and experientially grounded nature of what intelligence is, particularly as manifested in humans. Their proficiency is a product of advanced computational design and data processing, not an indication of a nascent form of intelligence in machines.

In fact, the term "artificial general intelligence" emerged in AI discourse in part to recover the meaning of "intelligence" after it had been diluted through overuse in describing machines that are not "intelligent", and to clarify what these so-called "intelligent" machines still lack in order to really be "intelligent". We all tend to oversimplify, and the field of AI is contributing to the evolution of the meaning of 'intelligence,' making the term increasingly polysemous. That's part of the charm of language. And as AI stirs both real promise and real societal anxiety, it's also worth remembering that the intelligence of machines does not exist in any meaningful sense.
The rapid advances in AI signal that it is beyond time to think about the impact we want, and don't want, AI to have on society. Doing so should not only allow but actively encourage us to consider both AI's capacities and its limitations, making every effort not to confuse 'intelligence' in its rich, general sense with the narrow, task-specific behaviors machines are capable of simulating or exhibiting. While some are racing toward Artificial General Intelligence (AGI), the question we should now be asking is not when they think they might succeed, but whether what they believe they could make happen truly makes sense, civilisationally, as something we should even aim to achieve, and where we draw the line on algorithmic transhumanism.
