
Is 'Sweatshop Data' Really Over?
What to Know: The future of 'sweatshop data'
You can measure time in the world of AI by the cadence of new essays with provocative titles. Another one arrived earlier this month from the team at Mechanize Work: a new startup that is trying to, er, automate all human labor. Its title? 'Sweatshop data is over.'
This one caught my attention. As regular readers may know, I've done a lot of reporting over the years on the origins of the data used to train AI systems. My story 'Inside Facebook's African Sweatshop' was the first to reveal how Meta used contractors in Kenya, some earning as little as $1.50 per hour, to remove content from its platforms—content that would later be used in attempts to train AI systems to do that job automatically. I also broke the news that OpenAI used workers from the same outsourcing company to detoxify ChatGPT. In both cases, workers said the labor left them with diagnoses of post-traumatic stress disorder. So if sweatshop data really is a thing of the past, that would be a very big deal indeed.
What the essay argues — Mechanize Work's essay points to a very real trend in AI research. To summarize: AI systems used to be relatively unintelligent. To teach them the difference between, say, a cat and a dog, you'd need to give them lots of different labeled examples of cats and dogs. The most cost-effective way to get those labels was from the Global South, where labor is cheap. But as AI systems have gotten smarter, they no longer need to be told basic information, the authors argue. AI companies are now desperately seeking expert data, which necessarily comes from people with PhDs—and who won't put up with poverty wages. 'Teaching AIs these new capabilities will require the dedicated efforts of high-skill specialists working full-time, not low-skill contractors working at scale,' the authors argue.
A new AI paradigm — The authors are, in one important sense, correct. The big money has indeed moved toward expert data. A clutch of companies, including Mechanize Work, are jostling to be the ones to dominate the space, which could eventually be worth hundreds of billions of dollars, according to insiders. Many of them aren't just hiring experts, but are also building dedicated software environments to help AI learn from experience at scale, in a paradigm known as reinforcement learning with verifiable rewards. It takes inspiration from DeepMind's 2017 model AlphaZero, which didn't need to observe humans playing chess or Go, and instead became superhuman just by playing against itself millions of times. In the same vein, these companies are trying to build software that would allow AI to 'self-play,' with the help of experts, on questions of coding, science, and math. If they can get that to work, it could potentially unlock major new leaps in capability, top researchers believe.
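The 'self-play with a verifier' idea can be sketched in a few lines of code. Below is a deliberately toy Python illustration (my own sketch, not any lab's actual training pipeline): a policy proposes answers to arithmetic questions, a mechanical verifier scores each attempt, and verified-correct answers are kept, with no human labeler anywhere in the loop. Real reinforcement learning with verifiable rewards swaps the lookup table for a neural network and the arithmetic check for unit tests or proof checkers, but the reward structure is the same.

```python
import random

def verify(question, answer):
    """Verifiable reward: a mechanical check, no human labeler needed."""
    a, b = question
    return answer == a + b

def improve(policy, questions, n_samples=256, seed=0):
    """One round of self-play: sample candidate answers for each question,
    keep the first one the verifier accepts, and store it in the policy.
    (A stand-in for the gradient update a real RL trainer would apply.)"""
    rng = random.Random(seed)
    for q in questions:
        for _ in range(n_samples):
            guess = rng.randint(0, 18)  # candidate sums of two digits
            if verify(q, guess):
                policy[q] = guess  # reinforce the verified-correct answer
                break
    return policy

questions = [(a, b) for a in range(5) for b in range(5)]
policy = improve({q: None for q in questions}, questions)
```

With enough sampled attempts per question, the policy converges on correct answers purely from the verifier's signal. That property, learning from a checkable reward rather than from labeled examples, is what these companies are betting will scale from toy arithmetic to coding, science, and math.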
There's just one problem — While all of this is true, it does not mean that sweatshop data has gone away. 'We don't observe the workforce of data workers, in the classical sense, decreasing,' says Milagros Miceli, a researcher at the Weizenbaum Institute in Berlin who studies so-called sweatshop data. 'Quite the opposite.'
Meta and TikTok, for example, still rely on thousands of contractors all over the world to remove harmful content from their systems—a task that has stubbornly resisted full AI automation. Other types of low-paid tasks, typically carried out in places like Kenya, the Philippines, and India, are booming.
'Right now what we are seeing is a lot of what we call algorithmic verification: people checking in on existing AI models to ensure that they are functioning according to plan,' Miceli says. 'The funny thing is, it's the same workers. If you talk to people, they will tell you: I have done content moderation. I have done data labeling. Now I am doing this.'
Who to Know: Shengjia Zhao, Chief Scientist, Meta Superintelligence Labs
Mark Zuckerberg promoted AI researcher Shengjia Zhao to chief scientist of the new effort inside Meta to create 'superintelligence.' Zhao joined Meta last month from OpenAI, where he worked on the o1-mini and o3-mini models.
Zuck's memo — In a note to staff on Saturday, Zuckerberg wrote: 'Shengjia has already pioneered several breakthroughs including a new scaling paradigm and distinguished himself as a leader in the field.' Zhao, who studied for his undergraduate degree in Beijing and graduated from Stanford with a PhD in 2022, 'will set the research agenda and scientific direction for our new lab,' Zuckerberg wrote.
Meta's recruiting push — Zuckerberg has ignited a fierce war for talent in the AI industry by offering top AI researchers pay packages worth up to $300 million, according to reports. 'I've lost track of how many people from here they've tried to get,' Sam Altman told OpenAI staff in a Slack message, according to the Wall Street Journal.
Bad news for LeCun — Zhao's promotion is yet another sign that Yann LeCun—who until the hiring blitz this year was Meta's most senior AI scientist—has been put out to pasture. A notable critic of the idea that LLMs will scale to superintelligence, LeCun appears increasingly at odds with Zuckerberg's bullishness. Meta's Superintelligence team is clearly now a higher priority for Zuckerberg than the separate group LeCun runs, called Facebook AI Research (FAIR). In a note appended to his announcement of Zhao's promotion on Threads, Zuckerberg denied that LeCun had been sidelined. 'To avoid any confusion, there's no change in Yann's role,' he wrote. 'He will continue to be Chief Scientist for FAIR.'
AI in Action
One of the big ways AI is already affecting our world is in the changes it's bringing to our information ecosystem. News publishers have long complained that Google's 'AI Overviews' in its search results have reduced traffic, and therefore revenues, harming their ability to employ journalists and hold the powerful to account. Now we have new data from the Pew Research Center that puts that complaint into stark relief.
When AI summaries are included in search results, only 8% of users click through to a link — down from 15% without an AI summary, the study found. Just 1% of users clicked on any link in the AI summary itself, undercutting the argument that AI summaries are an effective way of sending users toward publishers' content.
As always, if you have an interesting story of AI in Action, we'd love to hear it. Email us at: intheloop@time.com
What We're Reading
'How to Save OpenAI's Nonprofit Soul, According to a Former OpenAI Employee,' by Jacob Hilton in TIME
Jacob Hilton, who worked at OpenAI between 2018 and 2023, writes about the ongoing battle over OpenAI's legal structure—and what it might mean for the future of our world.
'The nonprofit still has no independent staff of its own, and its board members are too busy running their own companies or academic labs to provide meaningful oversight,' he argues. 'To add to this, OpenAI's proposed restructuring now threatens to weaken the board's authority when it instead needs reinforcing.'



Meet the Police Officers Using AI to Draft Police Reports - Terms of Service with Clare Duffy
Clare Duffy 00:00:00 This spring, I spent a day at the police station in Fort Collins, Colorado. It's a department of about 240 officers serving a city of nearly 200,000 people at the base of the Rocky Mountain foothills. It's a college town, home to Colorado State University. Sgt. Bob Younger 00:00:17 Nice to meet you, Clare. Emily, right? Nice to meet you. Clare Duffy 00:00:20 Last year, Fort Collins became one of the first police departments in the country to start using Draft One, a software program that uses artificial intelligence to draft police reports. It's made by Axon, a company that also sells tasers, body cameras, and cloud storage for body cam video to police departments across the country. Fort Collins Police Chief Jeff Swoboda told me why they decided to try out the tool shortly after Axon launched it. Chief Jeff Swoboda 00:00:47 AI is coming; AI is here, and it's going to be a part of our daily lives, and we felt we would rather be early adopters and help kind of make the program better. Clare Duffy 00:00:58 We bring you a lot of stories on this podcast about how AI is reshaping the world around us. So when we learned about Draft One, we wanted to take a deeper look because this technology has the potential to go mainstream. It's still really new. Axon launched it in the spring of 2024, but it's already being used by police officers in cities like Fort Collins, Colorado, and Lafayette, Indiana, and Tampa, Florida. Axon says it's the fastest growing product they've ever launched. But it's come at a time when the use of AI by law enforcement is controversial after experiments with other technologies have gone wrong. And as I've learned throughout this reporting, while police reports might seem minor, they actually have a major impact on the criminal justice process. So if this new police tool is being used by officers in your community, or your local police department is considering using it, what should you know about it?
How is it changing day-to-day work for officers? And what legal and ethical questions does it raise? Over two episodes, we're going to dive into those questions and more. You'll hear from officers with the Fort Collins PD, a legal advocate, a criminal justice expert, and one of Axon's executives. I'm Clare Duffy, and this is a special episode of Terms of Service. We'll dive in after a quick break. Draft One is a software program that uses the audio from police-worn body cameras to generate a draft of a police report that an officer is then prompted to finish by filling in additional details. Axon isn't the only player in this industry. Law enforcement tech company Truleo makes a similar AI police report tool called Field Notes. But Axon is a big name in the law enforcement world, and a lot of the early conversations, research, and writing about AI-assisted police reports focus on Draft One. The main pitch for it is simple: They say it allows officers to spend less of their time writing reports and more time out in the field. Here's Chief Swoboda again. Chief Jeff Swoboda 00:03:13 The part that really opened my eyes and made me realize this is something bigger and better is a young officer said, you know what? I'm so thankful for this program because it's really a wellness play. And I said, well, tell me more. What do you mean by that? He goes, I'm busy out on the street. And so I go call to call, and I know I have a report to write, and I go to the next call, and I have another report to write. It's not very long before I know I have four reports to write. It used to weigh on his mind. Like I have to find some time to start writing these reports. Well, now he feels much calmer about that because he knows there's a draft waiting for him for each one of those calls. So, he goes, I'm just less anxious. Clare Duffy 00:03:50 Fort Collins piloted Draft One last year with a group of about 70 officers.
They were asked to track how long it took them to write their reports, with and without the tool. Officer Scott Brittingham was part of that test group. Officer Scott Brittingham 00:04:04 We would type out a police report and time it. We would paste that into a document. Then we would do a separate Draft One report and time that, also paste that in so you could see side by side how the two reports compare. And reports that were taking me maybe a half hour to 45 minutes to type were taking me 10 minutes or less. Clare Duffy 00:04:23 Scott is also a training officer, so he writes reports himself, and he also shows new hires how to write them. Officer Scott Brittingham 00:04:30 Report writing was always something I took a lot of pride in and worked very hard on, so I believe that my reports read well when I type them, but that's not the case for everybody. So I think this is really something that could not only help people that are putting out a lot of reports to be more efficient, but maybe people whose strong suit isn't that writing process to help them be a little more fluid in their reports. Clare Duffy 00:04:51 Will you just talk a little bit more about why that is an important part of the job? Officer Scott Brittingham 00:04:55 Yeah, so the importance in report writing is obviously to document what happened. And there's a lot of times when things go to a criminal trial, if it goes all the way that far, it can be a year or more until that finally happens. And you're not going to always remember like it was yesterday. So the report is very important in not only helping you recall what happened, but to explain to the prosecutor, to the defense, to paint a very clear picture of what happened, all of the senses, what you're seeing, what you're smelling, what you're hearing, because that's all a part of it. 
So it's important to have all those little details, but also not be so wordy that somebody needs to put it down for a day and then come back to it to finish your report. So kind of using important words and impactful words in a more condensed way. Clare Duffy 00:05:39 That feels like a really important point, like whether you use the AI or you don't, you're still accountable to that report. You may be asked questions about the report in court. Officer Scott Brittingham 00:05:51 Correct. And the report is your refresher. If you have to testify in court, you don't get to sit up there and read your report. So you still have to get up there and talk about what happened from your memory. But then you have that report to look at to refresh yourself. Prof. Andrew Guthrie Ferguson 00:06:06 Police reports are like the lifeblood of the criminal justice system. Clare Duffy 00:06:11 Andrew Guthrie Ferguson is a law professor and a former defense attorney, so he knows a lot about the role of the police report in the criminal justice system. Prof. Andrew Guthrie Ferguson 00:06:21 The police report might be the only memorialization of a particular incident. It can be the reason and the way that a prosecutor decides to paper a case, take a case forward, and keep charging it. It can be the document that a judge looks at to decide whether or not an individual should be held without bond until the next hearing. It is the document the defense lawyers get when they get the case to figure out what has happened. And in low-level misdemeanor cases, even low-level felonies, it might be the only documentation you have. There might not be some like other investigation that happens in a low-level misdemeanor. It might be actually limited to what is put into those police reports and memorialized at that first instance. Clare Duffy 00:07:07 In his academic career, Andrew has focused on technology in policing. And right now, he's really interested in how AI is being used by police departments.
One of those applications is facial recognition. That's when police officers feed the image of an unknown face into an algorithm that compares it against a large database of mugshots or driver's license photos to try to identify a suspect. You may have seen news coverage about when this process has gone wrong. Like in 2023, when Detroit police falsely arrested Porcha Woodruff after AI software returned her photo as a potential suspect for a carjacking and robbery. She was eight months pregnant at the time, and police arrested her despite the fact that nothing in the surveillance images or eyewitness statements indicated that a pregnant woman was involved. The prosecutor dismissed the case a month later. So those facial recognition systems have understandably gotten a lot of public attention. But when it comes to AI-drafted police reports, Andrew thinks not many people are aware of this technology outside of law enforcement and legal circles. Prof. Andrew Guthrie Ferguson 00:08:21 I don't think the general public is necessarily paying attention. I definitely, when I've talked about, oh, I just wrote this article on AI-assisted police reports, they're like, oh my gosh, that's such a fascinating idea. I've never heard of that. Clare Duffy 00:08:32 In a law review article Andrew wrote about AI-assisted police reports, he laid out what he sees as some of the potential concerns with the technology, starting with how it's trained. Prof. Andrew Guthrie Ferguson 00:08:43 You have to design the system, you have to train the models, and there can be errors or biases in the models of how you're going to get the predictive text to work. Remember, all that's happening with a large language model like ChatGPT is it is a predictive text idea. And obviously, depending on how you build the model, you may do that correctly; you may have errors.
Clare Duffy 00:09:04 He pointed out that there could be mistakes in the transcription of the body cam audio that could make their way into an AI-generated draft report, especially if the person being recorded uses slang the AI isn't familiar with or speaks in an accent that it hasn't been trained to recognize. Prof. Andrew Guthrie Ferguson 00:09:21 The transcript that you get, which becomes a police report, might be filled with like misunderstandings because the algorithm didn't understand like a Southern accent. Clare Duffy 00:09:31 In recent weeks, Axon has rolled out the ability for Draft One to create reports from body cam footage with both English and Spanish audio, and to translate the Spanish details into English. Andrew says there's also a concern about potential omissions, anything that won't show up in a transcript. Think nonverbal cues, like if a person nodded but didn't say yes or no out loud. Prof. Andrew Guthrie Ferguson 00:09:56 What the audio doesn't pick up could be very important in a case, and it's only picking up the audio. It's not even seeing the video even though we have the video. Clare Duffy 00:10:04 And then there's a question of hallucinations. That's when AI interjects incorrect or misleading details seemingly out of nowhere. From the reporting I've done on AI, all of these concerns sound familiar to me. We know that while the accuracy of AI has advanced rapidly, it's fallible. It makes mistakes; it sometimes hallucinates. It can also absorb the biases and blind spots of whatever data it's trained on. I know from Andrew and the officers I spoke with in Fort Collins why accurate, thorough police reports are vital to the criminal justice process. So I wanted to know — what's Axon doing to prevent mistakes from ending up in those reports? We'll dig into that after a break. Josh Isner 00:10:59 We try to view ourselves as the preeminent technology company in public safety. We bring disruptive technologies into the market.
The market's traditionally been underserved by technology. And so we see the opportunity there, not only to have an impact, but literally to save lives. Clare Duffy 00:11:15 Josh Isner is the president at Axon. He's worked at Axon since 2009 when the company was still called Taser International. That's how Axon got started, making so-called less lethal defense weapons for police departments. The name was changed in 2017 to Axon, the name of their line of body cameras. Josh and I spoke when he was in New York earlier this year. How do you go from being a Taser company to an AI company? Will you just sort of talk about the genesis of this product? Josh Isner 00:11:44 Yeah, it was hard. It was really hard. It's a story of kind of one thing leading to another. So there were questions about how tasers were being used in the field. So the next logical step was to put a camera on the taser. That started to work, and we said, well, police use a lot of different types of force, not just tasers. Let's put the body camera on the officer instead. That led to a bigger kind of problem slash opportunity where police were generating more digital evidence than they ever had before, and they were trying to manage it on premise. So we came out with our cloud platform for managing digital evidence. Currently today, we manage 40 times as much video as the Netflix library. Clare Duffy 00:12:23 That's right, he says there's 40 times as much video as the Netflix library. Even if this is the first time you're hearing of the company, there's a high likelihood Axon products are used in your community. Their clients range from the NYPD to small town police departments across the country. Josh Isner 00:12:42 In the United States, I think it's safe to say that almost every single department uses an Axon product, whether it's a taser or body camera. There are 18,000 departments, and I'd say high 17,000s use something from us.
So I don't think we've announced the exact adoption or revenue numbers, but I will say Draft One is our fastest growing product that we've ever brought to market, which is a pretty crazy thing 30 years into the company's life cycle. Clare Duffy 00:13:09 Josh declined to say how many police departments are currently using Draft One. But, based on his estimate of Axon's current customer base, it's clear the potential growth for the product is high. That's because it's built to work with the existing products, like the body cams that departments are already using. Say I'm a police officer, and I've just been out on a call, and now I want Draft One to help me write my police report. What do I do? How does it work? Josh Isner 00:13:41 Yeah, the cop will get a call for service. There'll be a body camera video and audio transcript. The audio transcript is uploaded to the cloud as is the video. That's when our AI starts to analyze that transcript. And then as soon as you hit generate report, you'll start to see it. If you've ever used the ChatGPT app and you just see the text populating in the app, that's a similar experience to what you see. And from there, you know, it highlights some of the areas where you need to fill in because it really does have to be the officer's own report at the end of the day, and they have to sign off as to what happened. So it's very important that there's a review process in place, but from there it's submitted to the supervisor for approval, and then it makes its way through the legal process over to the prosecutors and defense attorneys and courts and so forth. Clare Duffy 00:14:28 And is this based on a mainstream AI model, or is this something that Axon built in-house, or trained further? What's the sort of backend look like? Josh Isner 00:14:37 Sure, so Draft One is ChatGPT-based.
From there we go through a pretty strenuous process before it hits the market, and a lot of that is testing with our Ethics and Equity Advisory Council, or we call the EEAC. And this group, we've worked with them now for three or four years, and they represent the voices that are not sometimes in the room when we're developing police technology. They're generally a little more skeptical. Their life experiences are a lot different than a lot of ours. And that actually has become a very valuable part of what we do because they would not sign off on this product, rightfully so, unless we could prove that it was rid of inherent biases. Especially as it pertains to race or gender, and so that process of calibration can take some time, and whenever there's a new model out, we have to go back through that cycle to make sure that we're not being irresponsible in any way. Clare Duffy 00:15:28 Who is part of that group? Is that independent advisors? Josh Isner 00:15:30 Sure, picture kind of leading activists in a lot of major cities around the United States, and these folks have come from different social organizations or have shown an interest in equity and equality in policing, and, ultimately, you know, when we have diverse opinions in the room, that's what leads to our best product work, and so we're really proud of this partnership. Clare Duffy 00:15:52 Axon lists members of that council on its website. It also lays out a set of guiding principles it follows when creating new tech products. Since the company handles criminal justice information, it has what's called a Criminal Justice Information Services Certification, which is issued by the FBI. Axon says that certification strictly prohibits them, and Microsoft and AI partners like OpenAI, from using the data collected by its software for AI training unless they get explicit permission from police departments. 
All these questions about the privacy, accuracy and equity considerations involved in using these tools impact more than just police departments. The technology has broader ramifications in the criminal justice system. What kinds of questions have you gotten from the sort of downstream parts of the process, from the prosecutors, from the judges? Like are there questions about how legit this is? Josh Isner 00:16:49 Yeah, sure. Right, you know, right off the bat, there was a lot of skepticism. And then what happened is these prosecutors saw the quality of these reports. It wasn't a winning argument to the prosecutor that like, oh, they can do these reports faster now in this police department. But then they saw the quality, and they're, like, holy cow, this is so consistent. And, you know, they went and watched the body camera video and cross-referenced the report. And all of a sudden they're like, man, this is something that's going to make our job easier as a prosecutor. Clare Duffy 00:17:18 Josh told me he's received largely positive feedback from prosecutors. But there's been at least one case so far of a prosecutor's office saying they will not accept reports written with the help of AI. Last September, the prosecutor's office in King County, Washington, sent a message to police chiefs after local law enforcement agencies expressed interest in using Draft One. In the email, the office said it would, quote, "not accept any police report narratives that have been produced with the assistance of AI." It shared concerns that Draft One could, quote, "likely result in many of your officers approving Axon-drafted narratives with unintentional errors in them." We checked in with the King County Prosecutor's Office on this, and they confirmed that the office's position hasn't changed since they sent out that memo. We also asked Axon about this.
A spokesperson said that they are, quote, "committed to continuous collaboration with police agencies, prosecutors, defense attorneys, community advocates, and other stakeholders to gather input and guide the responsible evolution of Draft One." The spokesperson also said that report narratives are, quote, "drafted strictly from the audio transcript from the body-worn camera recording, and Axon calibrated the underlying model for Draft One to minimize speculation or embellishments." One of the key features that Axon says can help prevent unintended AI errors is the fill-in-the-blank prompts that are included in each draft report. They're intended to ensure that officers read through and edit the reports before submitting them as final. Josh explained what some of those prompts are. Josh Isner 00:18:58 Maybe the person's name, maybe anything on their ID that got missed if the camera quality wasn't good or if the person didn't say it out loud. So there are things like that, license plates, any additional commentary on what happened, why an officer perceived something and acted the way they did. But those are generally the things that they'd fill in the blanks on. Clare Duffy 00:19:18 And if I'm a defendant in a case where Draft One has been used to write the police report, is that something that the police department discloses, like will I or my defender know that? Josh Isner 00:19:27 I would assume so. I don't know that for certain, but I think a lot of times when an officer is being cross-examined for what he or she wrote in the report, it does come out like, hey, we used a third-party service to write the first draft of this, and the prosecutors are certainly aware. The defense attorneys are certainly aware. So it's no secret. We don't want it to be a secret. We want this to be about, hey, we're making these more factual, higher quality, and more efficient and, ultimately, that should serve everyone.
The truth is really what we're after, and we think we're in a position to provide a very clear picture of that using products like this. Clare Duffy 00:20:06 In Fort Collins, Sergeant Bob Younger led the charge of adopting Draft One. He said he was sold on it pretty quickly after seeing a demo in early 2024. And as the department's technology sergeant, it was up to him to get others on board. Sgt. Bob Younger 00:20:24 So we have lots of stakeholders. First and foremost, I think our officers are super important. We need to get their feedback on it. But then we have judges. We have district attorneys. We have defense counsel. We have citizens. And all of these stakeholders need to have a say, right? They need the same experience that I had when I first tried it in understanding how it works and seeing how it works. And there's always going to be pushback from some people here and there. But I think over time, as they see more and more, the advancement of AI technology, the more and more popular and commonly it's being used in our day-to-day lives, they're recognizing that, if done right and done responsibly, it's a very powerful tool that can help us. You know, when I made my first presentation in the DA's office, there were a few hard questions in there, like, you know, one of their concerns is, well, I don't want it spitting out a report and then copying and pasting it. Well, no, it takes human intervention and interaction. I can't just copy the report over. It requires me to go in and make changes and alterations, fix phrases, remove bracketed terminology where it's asking for clarification and so forth. So it takes human interaction, number one. And number two, it was really important for me for our prosecutors to recognize that, you know, people ask, like, what are you fearful of as a cop, are you worried about getting in a shootout and stuff? Sure, that's always in the back of your mind.
What an officer is worried about is being critiqued or held responsible for an error or doing something and being inaccurate and not articulating correctly. And so officers are super hyper focused on the quality and quantity of their work. When I'm producing a report, I'm checking it that entire time for accuracy. I'm checking to make sure it's articulating everything that I did and saw and smelled and heard. And without that interaction with that draft report, it's not my report. That's Draft One's report. It's not my report until I touch it, get my hands on it, make those changes, and then put it into our records management system after I have approved it. And then it goes on to other approval steps beyond me. Clare Duffy 00:22:26 What happens to that first draft, just the AI-generated report? Is that saved? Is that, like, could you go back and reference what did the AI say versus what did the officer add? Sgt. Bob Younger 00:22:37 It's gone, it's gone. So as soon as the officer either closes out that window or copies and pastes it, that's gone. It's not stored on any servers. I can't see it, I can't see it in an audit trail. I can't pull it back up and go, oh well, let me see what Axon Draft One created versus what the officer created. It doesn't happen. Clare Duffy 00:22:56 We followed up with Axon on this, and they confirmed that this is the case not just for Fort Collins, but for any department that uses Draft One. That original AI-generated draft with its fill-in-the-blank prompts isn't retained anywhere, which the company says is designed to mimic the old-school process where only final reports, not drafts, are saved. If I am a victim of a crime who wants a copy of the police report, or if I have been accused of a crime, and we're in court, and the police report comes up, will I know that AI was involved in drafting the report? Sgt. Bob Younger 00:23:32 Awesome question. And from our agency, no.
Now, in fairness, Axon does allow agencies administratively to turn on a switch in the background that will then put a line at the bottom of the report: "Axon Draft One was used to create this report." But that in and of itself is not really an accurate statement to me, because it's not. It was maybe used to create the draft, but the officer did that. When someone pushes "create report" with Draft One, we don't know how much, if any, of that draft is used. That's not the purpose. The purpose is to create that building block, but the officer creates the report.

Clare Duffy 00:24:09
Axon confirmed that, by default, the reports include a customizable disclaimer, but police departments can choose to turn that feature off. That's one of the ways police departments have leeway in how they use the technology. Axon provides some training and guidance about using it, including about its safeguards and best practices. But it's up to the police department to decide things like which officers can use it, or whether it can be used for any incident or only specific kinds of incidents. What Axon and Fort Collins both really emphasized, though, were the fill-in-the-blank prompts that direct officers to edit the AI-generated report. They're the stopgap for any errors made by the AI: until an officer either fills them out or deletes them, the report isn't final. And at least in Fort Collins' case, that's the reason they cite for not disclosing on finished reports that AI was used to help write them. I wanted to see this process for myself. That's next time on Terms of Service.

Yahoo
Chip startup Oxmiq launches GPU tech for license
By Max A. Cherney

SAN FRANCISCO (Reuters) - Oxmiq Labs said on Tuesday that it planned to launch licensable graphics processor technology geared for artificial intelligence data crunching. Founded by Intel's former chief architect, Raja Koduri, Oxmiq said that it has raised $20 million in seed capital to help launch the new GPU intellectual property. The funding round includes investments from angel investors and corporate strategic investors, including MediaTek, Oxmiq said. The company did not disclose its valuation.

Oxmiq's GPU technology is capable of scaling from a single core, for physical AI applications such as robotics, to thousands of cores that would be useful in a cloud computing company's data center. The company said it can customize the GPU architecture for specific types of computing. "We want to be Arm for the next generation," Koduri told Reuters.

The Campbell, California-based company said it was taking a software-first approach to constructing its chip designs and has built a tool to allow software programs written for Nvidia's CUDA to work on non-Nvidia hardware "without code modification or recompilation." The company said it opted to pursue building intellectual property instead of a complete chip design because doing so avoids the high costs: a cutting-edge chip can cost more than $500 million to design.

At Intel, Koduri oversaw the development of the company's graphics chips. He has also held senior positions at Advanced Micro Devices and Apple.

Business Insider
AI is already driving up unemployment among young tech workers, according to Goldman Sachs
Artificial intelligence is reshaping the US job market — and young tech workers are bearing the brunt of it. "It is true that AI is starting to show up more clearly in the data," wrote Jan Hatzius, Goldman Sachs' chief economist, in a Monday note.

Goldman's analysis shows that the tech sector's share of US employment peaked in November 2022 — when ChatGPT launched — and has since fallen below its long-term trend. The impact has been especially sharp for young tech workers: the unemployment rate for 20- to 30-year-olds in tech has risen by nearly 3 percentage points since early 2024, more than four times the increase in the overall jobless rate. That spike is yet another sign that generative AI is starting to displace white-collar jobs, especially among early-career workers.

"While this is still a small share of the overall US labor market, we estimate that generative AI will eventually displace 6-7% of all US workers," Hatzius wrote. Goldman expects that shift to happen over the next decade. The firm forecasts that the peak unemployment impact will be limited to a "manageable" 0.5 percentage point, as other industries absorb many displaced workers.

The report comes amid growing concerns about US labor market weakness. The US economy added just 73,000 jobs in July, far short of the 106,000 expected by economists, according to data released by the Bureau of Labor Statistics on Friday. Job growth for May and June was also revised sharply lower. "Friday's jobs numbers reinforced our view that US growth is near stall speed — a pace below which the labor market weakens in a self-reinforcing fashion," Hatzius wrote.

Despite AI's impact, Hatzius pointed to a bigger near-term problem: a slowdown in US output growth, which he attributes in part to higher tariffs. Goldman estimates that real GDP grew at a 1.2% annualized rate in the first half of the year, and its analysts expect a "similarly sluggish pace" in the second half. "While the easing in financial conditions and the pickup in business confidence should support growth, real disposable income and consumer spending are likely to grow very slowly, not just because of the weakness in job growth but also because most of the pass-through from tariffs to consumer prices is still ahead of us," Hatzius wrote.

Tech leaders have warned of an AI-induced jobs cliff. In May, Anthropic CEO Dario Amodei said that AI may eliminate 50% of entry-level, white-collar jobs in the next five years.