
Latest news with #AIForward

CarGurus created a working group for AI experimentation. Employees are buying in.

Business Insider

25-06-2025

  • Automotive
  • Business Insider

CarGurus created a working group for AI experimentation. Employees are buying in.

CarGurus launched a group tasked with helping employees explore and adopt AI tools. Sarah Rich, a lead coordinator of the group, says the effort helps people use AI more effectively. This article is part of "Culture of Innovation," a series on how businesses can prompt better ideas.

There's no shortage of hype around the potential for AI to transform the workplace. A recent McKinsey report compared the tech to the birth of the internet and the arrival of the steam engine. But its reality is still taking shape. AI adoption is inconsistent at most organizations, workers have varying levels of interest, and there's often a difference between AI buzz and its practical application.

CarGurus, an online marketplace for buying and selling cars, is one company trying to bridge that divide. Last October, it launched AI Forward, a 20-person working group that brings together leaders across departments, including product, engineering, legal, and sales. The group's goal is to identify the right applications for AI, evaluate potential tools, and encourage employee experimentation through workshops, one-on-one guidance, and pilot programs.

"If everyone has to figure out AI tools on their own, we risk losing interest," said Sarah Rich, a senior principal data scientist at CarGurus and a lead coordinator of AI Forward. "We're trying to offer cheat sheets and share what's working." She added that once employees see how AI can make their day-to-day more efficient or offer new approaches, they tend to get on board. "We want to make sure that when we ask people to invest time in AI, they're going to quickly see a reward."

Rich spoke with Business Insider about how AI Forward is helping employees gain the confidence to explore the technology. The following has been edited for clarity and length.

Business Insider: What was the reason for AI Forward?

Sarah Rich: There's a lot of pressure to get ahead with AI. And I imagine this is the case at many companies — there's a sense that if you don't keep up, you're leaving innovation on the table. At the same time, there's a gap between the excitement around AI and understanding what it means for each role. We started AI Forward to meet every business unit and function where they are. The group works together to evaluate use cases and AI tools, which is key given how fast AI is evolving and the constant onslaught of new capabilities. The group also offers structured support to help employees learn how to use the tools.

How often does the group meet, and what was your first order of business?

We meet monthly as a group, and in between, departments hold their own focused sessions. One of the first things I did was meet individually with leaders to help identify a few solid use cases that could really move the needle for their teams. Some were ready to go; others had no idea where to start. We spent a lot of time brainstorming, understanding where the underlying tech is, and recognizing that in some functions, the tech just isn't there yet. But in other functions, like coding tools in engineering or natural language-based solutions for reviewing contracts in legal, the tools are ready.

What happens next?

We carve out time and space for people to experiment. For our engineering teams, we run office hours and jam sessions, which are essentially open collaborations, to help people learn coding tools like Cursor and Windsurf. We also held an AI coding week to help everyone start using an AI tool on the job. LLM solutions are effective for language-focused work that's labor-intensive. When teams experiment with those tools, they see their work accelerate quickly. We make time for experimentation; it doesn't just happen. But usually people see something that impresses them, and AI starts to sell itself.

What's the group doing to support employees who are less open to AI?

People are at different places on the adoption and enthusiasm curve. Some are excited about an open-ended jam session. Others need structure, where they're required to try a tool on ticketed work or assigned tasks or projects, and get help as they go. Our group has learned that we need offerings at different levels. It's important that everyone comes along to some degree, but not everyone is going to have the same level of zeal, and that's OK.

How are you measuring success for AI Forward?

We're tracking several metrics: how often people use AI, which tools they use, their confidence in using them safely, and their overall sentiment about AI. There's often a focus on adoption in terms of efficiency or hours saved, but people tend to misjudge that. AI might not always save time, but it might help you create a better product because you explored six different directions to test options before feeling confident you've landed on the best one. We're careful about sentiment because AI is disruptive and can feel threatening. Pushing AI without acknowledging that nuance feels tone-deaf.

What have you learned from AI Forward?

We've seen patterns emerge in our data in three phases. First, people feel enthusiastic because they've been told AI is magic and will solve everything. Then there's a middle-ground disillusionment, where people have had some interaction with AI tools, but the tools haven't worked or lived up to the hype. There's a narrative around AI replacing jobs versus augmenting them. The ideal third phase comes when people start to use AI and don't feel threatened by it. They see that it makes them better at their job. They also get that without real people, AI can't do meaningful, impactful work. Sentiment depends on where the individual or team is in their adoption effort and how successful they've been at finding the right use cases. Based on internal data, ranging from the use of enterprise-wide AI productivity tools to procurement requests for new AI products and anecdotes across teams, it's clear that the vast majority of employees have, at minimum, tried AI in their day-to-day work.

What's your advice for companies that want to start similar AI working groups?

Even though AI is novel in many ways, especially in how it affects people psychologically and emotionally, it's also pretty familiar. While there's a tendency to get caught up in the technology, the real challenge is the humans. I recommend focusing on them: bring people together, make them feel safe, and give them a reason and a space to pay attention. It needs to feel good and encouraging, not alienating.

From Scalable Solutions to Full-Stack AI Infrastructure, GIGABYTE to Present End-to-End AI Portfolio at COMPUTEX 2025

National Post

01-05-2025

  • Business
  • National Post

From Scalable Solutions to Full-Stack AI Infrastructure, GIGABYTE to Present End-to-End AI Portfolio at COMPUTEX 2025

TAIPEI — GIGABYTE Technology, a global leader in computing innovation, will return to COMPUTEX 2025 from May 20 to 23 under the theme "Omnipresence of Computing: AI Forward." The company will demonstrate how its complete spectrum of solutions, spanning the AI lifecycle from data center training to edge deployment and end-user applications, reshapes infrastructure to meet next-gen AI demands.

As generative AI continues to evolve, so do the demands for handling massive token volumes, real-time data streaming, and high-throughput compute environments. GIGABYTE's end-to-end portfolio, ranging from rack-scale infrastructure to servers, cooling systems, embedded platforms, and personal computing, forms the foundation to accelerate AI breakthroughs across industries.

At the heart of GIGABYTE's exhibit is the enhanced GIGAPOD, a scalable GPU cluster designed for high-density data centers and large AI model training. Built for high-performance AI workloads, GIGAPOD supports the latest accelerator platforms, including AMD Instinct™ MI325X and NVIDIA HGX™ H200. It is now integrated with GPM (GIGABYTE POD Manager), GIGABYTE's proprietary infrastructure and workflow management platform, which can enhance operational efficiency, streamline management, and optimize resource utilization across large-scale AI environments.

This year will also see the debut of the GIGAPOD Direct Liquid Cooling (DLC) variant, incorporating GIGABYTE's G4L3 series servers and engineered for next-gen chips with TDPs exceeding 1,000W. The DLC solution is demonstrated in a 4+1 rack configuration in partnership with Kenmec, Vertiv, and nVent, featuring integrated cooling, power distribution, and network architecture. To help customers deploy faster and smarter, GIGABYTE offers end-to-end consulting services, including planning, deployment, and system validation, accelerating the path from concept to operation.

As AI adoption shifts from training to deployment, GIGABYTE's flexible system design and architecture ensure seamless transition and expansion. GIGABYTE presents the cutting-edge NVIDIA GB300 NVL72, a fully liquid-cooled, rack-scale design that unifies 72 NVIDIA Blackwell Ultra GPUs and 36 Arm®-based NVIDIA Grace™ CPUs in a single platform optimized for test-time scaling inference. Also shown at the booth are two OCP-compliant server racks: an 8OU AI system with NVIDIA HGX™ B200 integrated with Intel® Xeon® processors, and an ORV3 CPU-based storage rack with a JBOD design to maximize density and throughput.

GIGABYTE also exhibits modular and diverse servers, from high-performance GPU to storage-optimized, to meet different AI workloads:

  • Accelerated Compute: Air- and liquid-cooled servers for the latest AMD Instinct™ MI325X, Intel® Gaudi® 3, and NVIDIA HGX™ B300 GPU platforms, optimized for GPU-to-GPU interconnects
  • CXL Technology: CXL-enabled systems unlock shared memory pools across CPUs for real-time AI inference
  • High-density Compute & Storage: Multi-node servers packed with high-core-count CPUs and NVMe/E1.S storage, developed in collaboration with Solidigm, ADATA, Kioxia, and Seagate
  • Cloud & Edge Platforms: Blade and node solutions optimized for power, thermal efficiency, and workload diversity, ideal for hyperscalers and managed service providers

Extending AI to real-world applications, GIGABYTE introduces a new generation of embedded systems and mini PCs that bring compute closer to where data is generated.

  • Jetson-Powered Embedded Systems: Featuring NVIDIA® Jetson Orin™, these rugged platforms power real-time edge AI in industrial automation, robotics, and machine vision.
  • BRIX Mini PCs: Compact yet powerful, the latest BRIX systems include onboard NPUs and support Microsoft Copilot+ and Adobe AI tools, perfect for lightweight AI inference at the edge.

Expanding its leadership from cloud to edge, GIGABYTE delivers powerful on-premises AI acceleration with its advanced Z890 / X870 motherboards and cutting-edge GeForce RTX 50 and Radeon RX 9000 Series graphics cards. The innovative AI TOP local AI computing solution simplifies complex AI workflows through memory offloading and multi-node clustering capabilities. This AI innovation extends throughout the consumer lineup, from Microsoft-certified Copilot+ AI PCs and gaming powerhouses to high-refresh OLED monitors. On laptops, the exclusive "Press and Speak" GIMATE AI agent enables intuitive hardware control, enhancing both productivity and everyday AI experiences.

From Scalable Solutions to Full-Stack AI Infrastructure, GIGABYTE to Present End-to-End AI Portfolio at COMPUTEX 2025

Business Wire

01-05-2025

  • Business
  • Business Wire

From Scalable Solutions to Full-Stack AI Infrastructure, GIGABYTE to Present End-to-End AI Portfolio at COMPUTEX 2025

TAIPEI--(BUSINESS WIRE)--GIGABYTE Technology, a global leader in computing innovation, will return to COMPUTEX 2025 from May 20 to 23 under the theme "Omnipresence of Computing: AI Forward." The company will demonstrate how its complete spectrum of solutions, spanning the AI lifecycle from data center training to edge deployment and end-user applications, reshapes infrastructure to meet next-gen AI demands.

As generative AI continues to evolve, so do the demands for handling massive token volumes, real-time data streaming, and high-throughput compute environments. GIGABYTE's end-to-end portfolio, ranging from rack-scale infrastructure to servers, cooling systems, embedded platforms, and personal computing, forms the foundation to accelerate AI breakthroughs across industries.

Scalable AI Infrastructure Starts Here: GIGAPOD with GPM Integration

At the heart of GIGABYTE's exhibit is the enhanced GIGAPOD, a scalable GPU cluster designed for high-density data centers and large AI model training. Built for high-performance AI workloads, GIGAPOD supports the latest accelerator platforms, including AMD Instinct™ MI325X and NVIDIA HGX™ H200. It is now integrated with GPM (GIGABYTE POD Manager), GIGABYTE's proprietary infrastructure and workflow management platform, which can enhance operational efficiency, streamline management, and optimize resource utilization across large-scale AI environments.

This year will also see the debut of the GIGAPOD Direct Liquid Cooling (DLC) variant, incorporating GIGABYTE's G4L3 series servers and engineered for next-gen chips with TDPs exceeding 1,000W. The DLC solution is demonstrated in a 4+1 rack configuration in partnership with Kenmec, Vertiv, and nVent, featuring integrated cooling, power distribution, and network architecture. To help customers deploy faster and smarter, GIGABYTE offers end-to-end consulting services, including planning, deployment, and system validation, accelerating the path from concept to operation.

Built for Deployment: From Super Compute Module to Open Compute and Custom Workloads

As AI adoption shifts from training to deployment, GIGABYTE's flexible system design and architecture ensure seamless transition and expansion. GIGABYTE presents the cutting-edge NVIDIA GB300 NVL72, a fully liquid-cooled, rack-scale design that unifies 72 NVIDIA Blackwell Ultra GPUs and 36 Arm®-based NVIDIA Grace™ CPUs in a single platform optimized for test-time scaling inference. Also shown at the booth are two OCP-compliant server racks: an 8OU AI system with NVIDIA HGX™ B200 integrated with Intel® Xeon® processors, and an ORV3 CPU-based storage rack with a JBOD design to maximize density and throughput.

GIGABYTE also exhibits modular and diverse servers, from high-performance GPU to storage-optimized, to meet different AI workloads:

  • Accelerated Compute: Air- and liquid-cooled servers for the latest AMD Instinct™ MI325X, Intel® Gaudi® 3, and NVIDIA HGX™ B300 GPU platforms, optimized for GPU-to-GPU interconnects
  • CXL Technology: CXL-enabled systems unlock shared memory pools across CPUs for real-time AI inference
  • High-density Compute & Storage: Multi-node servers packed with high-core-count CPUs and NVMe/E1.S storage, developed in collaboration with Solidigm, ADATA, Kioxia, and Seagate
  • Cloud & Edge Platforms: Blade and node solutions optimized for power, thermal efficiency, and workload diversity, ideal for hyperscalers and managed service providers

Bringing AI to the Edge—and to Everyone

Extending AI to real-world applications, GIGABYTE introduces a new generation of embedded systems and mini PCs that bring compute closer to where data is generated.

  • Jetson-Powered Embedded Systems: Featuring NVIDIA® Jetson Orin™, these rugged platforms power real-time edge AI in industrial automation, robotics, and machine vision.
  • BRIX Mini PCs: Compact yet powerful, the latest BRIX systems include onboard NPUs and support Microsoft Copilot+ and Adobe AI tools, perfect for lightweight AI inference at the edge.

Expanding its leadership from cloud to edge, GIGABYTE delivers powerful on-premises AI acceleration with its advanced Z890 / X870 motherboards and cutting-edge GeForce RTX 50 and Radeon RX 9000 Series graphics cards. The innovative AI TOP local AI computing solution simplifies complex AI workflows through memory offloading and multi-node clustering capabilities. This AI innovation extends throughout the consumer lineup, from Microsoft-certified Copilot+ AI PCs and gaming powerhouses to high-refresh OLED monitors. On laptops, the exclusive "Press and Speak" GIMATE AI agent enables intuitive hardware control, enhancing both productivity and everyday AI experiences. GIGABYTE invites everyone to explore the AI Forward era, defined by scalable architecture, precision engineering, and a commitment to accelerating progress.
