CarGurus created a working group for AI experimentation. Employees are buying in.

CarGurus launched a group tasked with helping employees explore and adopt AI tools.
Sarah Rich, a lead coordinator of the group, says the effort helps people use AI more effectively.
This article is part of "Culture of Innovation," a series on how businesses can prompt better ideas.
There's no shortage of hype around the potential for AI to transform the workplace. A recent McKinsey report compared the tech to the birth of the internet and the arrival of the steam engine.
But its reality is still taking shape. AI adoption is inconsistent at most organizations, workers have varying levels of interest, and there's often a gap between AI buzz and its practical application.
CarGurus, an online marketplace for buying and selling cars, is one company trying to bridge that divide. Last October, it launched AI Forward, a 20-person working group that brings together leaders across departments, including product, engineering, legal, and sales. The group's goal is to identify the right applications for AI, evaluate potential tools, and encourage employee experimentation through workshops, one-on-one guidance, and pilot programs.
"If everyone has to figure out AI tools on their own, we risk losing interest," said Sarah Rich, a senior principal data scientist at CarGurus and a lead coordinator of AI Forward. "We're trying to offer cheat sheets and share what's working."
She added that once employees see how AI can make their day-to-day more efficient or offer new approaches, they tend to get on board. "We want to make sure that when we ask people to invest time in AI, they're going to quickly see a reward."
Rich spoke with Business Insider about how AI Forward is helping employees gain the confidence to explore the technology.
The following has been edited for clarity and length.
Business Insider: What was the reason for AI Forward?
Sarah Rich: There's a lot of pressure to get ahead with AI. And I imagine this is the case at many companies — there's a sense that if you don't keep up, you're leaving innovation on the table. At the same time, there's a gap between the excitement around AI and understanding what it means for each role.
We started AI Forward to meet every business unit and function where they are. The group works together to evaluate use cases and AI tools, which is key given how fast AI is evolving and the constant onslaught of capabilities. The group also offers structured support to help employees learn how to use the tools.
How often does the group meet, and what was your first order of business?
We meet monthly as a group, and in between, members hold focused sessions within their respective departments.
One of the first things I did was meet individually with leaders to help identify a few solid use cases that could really move the needle for their teams. Some were ready to go; others had no idea where to start. We spent a lot of time brainstorming, understanding where the underlying tech is, and recognizing that in some functions, the tech just isn't there yet.
But in other functions, like coding tools in engineering or natural language-based solutions for reviewing contracts in legal, the tools are ready.
What happens next?
We carve out time and space for people to experiment. For our engineering teams, we run office hours and jam sessions, which are essentially open collaborations, to help people learn coding tools, like Cursor and Windsurf. We also held an AI coding week to help everyone start using an AI tool on the job.
LLM solutions are effective for language-focused work that's labor intensive. When teams experiment with those tools, they see their work accelerate quickly. We make time for experimentation; it doesn't just happen. But usually people see something that impresses them, and AI starts to sell itself.
What's the group doing to support employees who are less open to AI?
People are at different places on the adoption and enthusiasm curve. Some are excited about an open-ended jam session. Others need structure, where they're required to try a tool on ticketed work, or assigned tasks or projects, and get help as they go.
Our group has learned that we need offerings at different levels. It's important that everyone comes along to some degree, but not everyone is going to have the same level of zeal, and that's OK.
How are you measuring success for AI Forward?
We're tracking several metrics: how often people use AI, which tools they use, their confidence in using them safely, and their overall sentiment about AI.
There's often a focus on adoption in terms of efficiency or hours saved, but people tend to misjudge that. AI might not always save time, but it might help you create a better product because you explored six different directions to test options before feeling confident you've landed on the best one. We're careful about sentiment because AI is disruptive and can feel threatening. Pushing AI without acknowledging that nuance feels tone deaf.
What have you learned from AI Forward?
We've seen patterns emerge in our data in three phases. First, people feel enthusiastic because they've been told AI is magic and will solve everything. Then, there's this middle-ground disillusionment, where people have had some interaction with AI tools, but they haven't worked or lived up to the hype. There's a narrative around AI replacing jobs versus augmenting them.
The ideal third phase comes when people start to use AI and don't feel threatened by it. They see that it makes them better at their job. They also get that without real people, AI can't do meaningful, impactful work.
Sentiment depends on where the individual or team is in their adoption effort and how successful they've been at finding the right use cases. Based on internal data, from use of enterprise-wide AI productivity tools and procurement requests for new AI products to anecdotes across teams, it's clear that a vast majority of employees have, at minimum, tried AI in their day-to-day work.
What's your advice for companies that want to start similar AI working groups?
Even though AI is novel in many ways, especially in how it affects people psychologically and emotionally, it's also pretty familiar.
While there's a tendency to get caught up in technology, the real challenge is the humans. I recommend focusing on them: bring people together, make them feel safe, and give them a reason and a space to pay attention. It needs to feel good and encouraging, not alienating.

