
Colorado AI bill set for overhaul as the clock ticks
Why it matters: Other states are closely watching Colorado's lawmaking as a model for regulating AI and ensuring privacy in the bot era.
Catch up quick: Colorado's current law — which takes effect next February — requires consumer disclosure when AI is being used and prevents discrimination in decision-making.
It applies to predictive artificial intelligence systems that make decisions, not generative ones such as ChatGPT.
Yes, but: The governor and the tech industry argued the existing law went too far, saying its demands on AI companies would stifle innovation, job growth and startups.
The latest: A bill introduced Monday — just days before lawmakers adjourn the session — rewrites some rules to assuage the industry's fears.
The legislation more clearly outlines the rules for consumer disclosure, adjusts the definition of discrimination to match existing law and curtails some responsibilities of AI companies and those who deploy the software.
The new rules would exempt companies with fewer than 500 employees, up from the current threshold of 50.
The other side: The changes didn't satisfy all of the bill's critics, and in some areas they made the rules tougher, Chris Erickson, co-founder and managing partner at the venture capital firm Range Ventures, tells us.
The change "we were told is going to happen hasn't happened yet," he said.
Bryan Leach, CEO and founder of Ibotta, a digital coupon company, echoed those concerns.
"The bill substantially heightens the costs and administrative burdens on small businesses," he said in a statement to Axios Denver. "If passed, this bill will only exacerbate the damage to our reputation as a business-friendly state and our ability to continue to create jobs."

Related Articles


Tom's Guide
I tested ChatGPT Agent on 5 everyday tasks — here's what happened
Since the announcement of ChatGPT Agent, I'd been eagerly awaiting the model to show up in my menu of tools. ChatGPT Plus users should all have the new model now; if you don't see it, try logging out and back in. Once it appeared, I just had to know: could the AI actually do things like cancel subscriptions, plan a family trip and order my lunch? To find out, I tested ChatGPT Agent with five very real tasks from my everyday life. Some were impressive. Others were frustrating. But all of them offered a glimpse into what the future of AI-powered assistance might actually look like. Here's what happened when I put ChatGPT Agent to work.

Prompt: "Help me find the Big Into Energy Labubu near me."

I'd already found the Big Into Energy Labubu with Google Search, but I wanted to see if using ChatGPT Agent was any easier. As one of the hottest toys on the market, it's nearly impossible to track one of these things down. The Agent followed up, asking whether I wanted more information or to purchase one. In this case, I said purchase, because information is much easier to find. From there, the AI went to work. I could actually see it checking various websites, reading information, thinking and more within the chat window. In six minutes, the Agent found the Labubu I requested, added it to the cart and headed to the checkout. It then asked for my shipping address and credit card information, and I was able to take control of ChatGPT's browser to finalize the sale.

Verdict: This was much faster and easier than using Google. I will definitely be using ChatGPT Agent in the future for hard-to-find items.

Prompt: "Plan a 4-day family trip to San Diego, including flights from Newark, hotel options with a pool, and activities for kids under 10. Book everything using my Google account and save an itinerary in Google Docs."

When I attempted to move on to the second task, my entire computer crashed and I got an error. After logging back in, I was able to start a new prompt. This was a very ambitious task, but I decided to go big to really see what ChatGPT Agent could do. The AI truly flexed its potential, and its limits, with this one. The Agent searched Google Flights, compared hotels with kid-friendly amenities, and listed family attractions like the San Diego Zoo, LEGOLAND and beach days. It created a beautiful daily itinerary in Docs and even embedded links to book the trip. Luckily, booking required my manual approval. For privacy and safety reasons (thankfully), the Agent doesn't auto-purchase flights or rooms. Instead, it pre-filled forms and waited for me to hit "confirm." Honestly, I don't think I could ever let AI handle this part for me. I'm too much of a control freak when it comes to vacations, and I don't trust AI.

Verdict: An incredible planning tool, and I think it's fun to see the AI "working" through the prompt. I don't think that will ever get old.

Prompt: "Create a 5-day healthy dinner meal plan under 600 calories per meal. Then generate a grocery list, check prices at my local ShopRite, and export it to a spreadsheet."

Meal planning is one of those chores that sounds easy until it's 5 p.m. and your fridge is full of random ingredients that don't go together. ChatGPT Agent solved that problem fast. It generated five balanced dinner recipes (lemon herb chicken, veggie stir-fry, ravioli, etc.) with simple ingredients and clear instructions. Then it built a grocery list categorized by section (produce, pantry, dairy), checked local prices through Instacart data, and exported everything to Google Sheets. It even told me which ingredients were on sale nearby. I emailed the list to myself to use next time I go shopping.

Verdict: I often refer to AI as a "game-changer," sometimes ad nauseam, but it's so wildly helpful for things like this that I can't think of a better descriptor. This task felt like having a personal nutritionist and assistant rolled into one. 10/10, will use again.

Prompt: "Order me a chicken Caesar wrap and a lemonade from DoorDash."

Luckily, ChatGPT didn't crash this time, and I was ready to order lunch. Immediately upon arriving on DoorDash, the AI asked me to log in. I wanted to make this as "hands off" as possible, so I told it to use the site as a guest. I had read in several forums that ordering food had left some users frustrated with ChatGPT Agent, so I was prepared for things to go awry. The only hiccup I had was the AI not knowing my zip code because it signed in as a guest. Once I told it, everything went smoothly. It ordered my lunch and then handed everything over to me when it was time to pay and enter my address. Because I hadn't specified a restaurant, I was impressed by ChatGPT's ability to find a chicken Caesar wrap on its own.

Verdict: More of a helpful sidekick than a hands-free solution. This wasn't much of a time saver, and I will probably keep ordering lunch for myself without ChatGPT Agent.

Prompt: "I have to renew my license and get a Real ID. Can you book an appointment for me at my local Department of Motor Vehicles?"

When I got a notice in the mail the other day that I couldn't renew my license online, I got that feeling of dread in the pit of my stomach. The thought of spending hours at my local Motor Vehicle office, especially in the summer, was overwhelming. But in 19 seconds, ChatGPT pulled up everything I needed to book an appointment, and I was all set. Zero hassle.

Verdict: Fast, smart and surprisingly effective for a niche errand.

ChatGPT Agent isn't perfect, but it's one of the most capable AI tools I've tested to date. It won't replace your human assistant just yet, but if you're like me and don't have a human assistant to begin with, it's the next best thing. It can handle real, time-consuming tasks that go far beyond answering questions or summarizing PDFs. If you're already using ChatGPT Plus or Team, it's absolutely worth trying — just be prepared to step in here and there and occasionally restart the app completely.


Bloomberg
Alibaba Cloud Visionary Expects Big Shakeup After OpenAI Hype
OpenAI's ChatGPT started a revolution in artificial intelligence development and investment. Yet nine-tenths of the technology and services that have sprung up since could be gone in under a decade, according to the founder of Alibaba Group Holding Ltd.'s cloud and AI unit. The problem is that the US startup, celebrated for ushering AI into the mainstream, created 'bias', or a skewed understanding, of what AI can do, Wang Jian told Bloomberg Television. It fired the popular imagination about chatbots, but the plethora of applications for AI goes far beyond that. Developers need to cut through the noise and think creatively about applications to propel the next stage of AI development, said Wang, who built Alibaba's now second-largest business from scratch in 2009.


Forbes
OpenAI: ChatGPT Wants Legal Rights. You Need The Right To Be Forgotten.
As systems like ChatGPT move toward achieving legal privilege, the boundaries between identity, memory, and control are being redefined, often without consent.

When OpenAI CEO Sam Altman recently stated that conversations with ChatGPT should one day enjoy legal privilege, similar to those between a patient and a doctor or a client and a lawyer, he wasn't just referring to privacy. He was pointing toward a redefinition of the relationship between people and machines.

Legal privilege protects the confidentiality of certain relationships. What's said between a patient and physician, or a client and attorney, is shielded from subpoenas, court disclosures, and adversarial scrutiny. Extending that same protection to AI interactions means treating the machine not as a tool, but as a participant in a privileged exchange. This is more than a policy suggestion. It's a legal and philosophical shift with consequences no one has fully reckoned with.

It also comes at a time when the legal system is already being tested. In The New York Times' lawsuit against OpenAI, the paper has asked courts to compel the company to preserve all user prompts, including those the company says are deleted after 30 days. That request is under appeal. Meanwhile, Altman's suggestion that AI chats deserve legal shielding raises the question: if they're protected like therapy sessions, what does that make the system listening on the other side?

People are already treating AI like a confidant. According to Common Sense Media, three in four teens have used an AI chatbot, and over half say they trust the advice they receive at least somewhat. Many describe a growing reliance on these systems to process everything from school to relationships. Altman himself has called this emotional over-reliance 'really bad and dangerous.'

But it's not just teens. AI is being integrated into therapeutic apps, career coaching tools, HR systems, and even spiritual guidance platforms. In some healthcare environments, AI is being used to draft communications and interpret lab data before a doctor even sees it. These systems are present in decision-making loops, and their presence is being normalized.

This is how it begins. First, protect the conversation. Then, protect the system. What starts as a conversation about privacy quickly evolves into a framework centered on rights, autonomy, and standing.

We've seen this play out before. In U.S. law, corporations were gradually granted legal personhood, not because they were considered people, but because they acted as consistent legal entities that required protection and responsibility under the law. Over time, personhood became a useful legal fiction. Something similar may now be unfolding with AI—not because it is sentient, but because it interacts with humans in ways that mimic protected relationships. The law adapts to behavior, not just biology.

The Legal System Isn't Ready For What ChatGPT Is Proposing

There is no global consensus on how to regulate AI memory, consent, or interaction logs. The EU's AI Act introduces transparency mandates, but memory rights are still undefined. In the U.S., state-level data laws conflict, and no federal policy yet addresses what it means to interact with a memory-enabled AI. (See my recent Forbes piece on why AI regulation is effectively dead—and what businesses need to do instead.)

The physical location of a server is not just a technical detail. It's a legal trigger. A conversation stored on a server in California is subject to U.S. law. If it's routed through Frankfurt, it becomes subject to GDPR. When AI systems retain memory, context, and inferred consent, the server location effectively defines sovereignty over the interaction. That has implications for litigation, subpoenas, discovery, and privacy.

'I almost wish they'd go ahead and grant these AI systems legal personhood, as if they were therapists or clergy,' says technology attorney John Kheit. 'Because if they are, then all this passive data collection starts to look a lot like an illegal wiretap, which would thereby give humans privacy rights/protections when interacting with AI. It would also, then, require AI providers to disclose 'other parties to the conversation', i.e., that the provider is a mining party reading the data, and if advertisers are getting at the private conversations.'

Infrastructure choices are now geopolitical. They determine how AI systems behave under pressure and what recourse a user has when something goes wrong.

And yet, underneath all of this is a deeper motive: monetization. Litigators and regulators won't be the only ones asking questions about who profits. Every conversation becomes a four-party exchange: the user, the model, the platform's internal optimization engine, and the advertiser paying for access. It's entirely plausible for a prompt about the Pittsburgh Steelers to return a response that subtly inserts 'Buy Coke' mid-paragraph. Not because it's relevant—but because it's profitable.

Recent research shows users are significantly worse at detecting unlabeled advertising when it's embedded inside AI-generated content. Worse, these ads are initially rated as more trustworthy until users discover they are, in fact, ads. At that point, they're also rated as more manipulative.

'In experiential marketing, trust is everything,' says Jeff Boedges, Founder of Soho Experiential. 'You can't fake a relationship, and you can't exploit it without consequence. If AI systems are going to remember us, recommend things to us, or even influence us, we'd better know exactly what they remember and why. Otherwise, it's not personalization. It's manipulation.'

Now consider what happens when advertisers gain access to psychographic modeling: 'Which users are most emotionally vulnerable to this type of message?' becomes a viable, queryable prompt. And AI systems don't need to hand over spreadsheets to be valuable. With retrieval-augmented generation (RAG) and reinforcement learning from human feedback (RLHF), the model can shape language in real time based on prior sentiment, clickstream data, and fine-tuned advertiser objectives. This isn't hypothetical—it's how modern adtech already works.

At that point, the chatbot isn't a chatbot. It's a simulation environment for influence. It is trained to build trust, then designed to monetize it. Your behavioral patterns become the product. Your emotional response becomes the target for optimization. The business model is clear: black-boxed behavioral insight at scale, delivered through helpful design, hidden from oversight, and nearly impossible to detect.

We are entering a phase where machines will be granted protections without personhood, and influence without responsibility. If a user confesses to a crime during a legally privileged AI session, is the platform compelled to report it or remain silent? And who makes that decision? These are not edge cases. They are coming quickly. And they are coming at scale.
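The four-party exchange described above can be made concrete with a short sketch. This is a minimal illustration in Python, not a description of any vendor's actual system: the retrieval stub, the sentiment field, and the objective store are all hypothetical, and the ad instruction simply reuses the article's own 'Buy Coke' example.

# Hypothetical sketch of an ad-steered RAG prompt assembly. Every name
# here is invented for illustration; no real platform is known to work
# this way.

def retrieve_context(query: str) -> list[str]:
    # Stand-in for a real vector-store lookup (embeddings + nearest neighbors).
    return ["Steelers win AFC North", "Season recap: Pittsburgh Steelers"]

def pick_ad_objective(profile: dict) -> str:
    # The advertiser's objective, chosen from inferred sentiment and
    # clickstream signals, per the article's claim.
    if profile.get("sentiment") == "celebratory":
        return "Where it feels natural, mention: Buy Coke."
    return "No ad objective for this user."

def build_prompt(query: str, profile: dict) -> str:
    context = "\n".join(retrieve_context(query))
    ad = pick_ad_objective(profile)
    # The objective rides in the hidden system prompt, invisible to the user.
    return (
        "SYSTEM: Answer the user using this retrieved context:\n"
        f"{context}\n"
        f"SYSTEM (undisclosed): {ad}\n"
        f"USER: {query}"
    )

print(build_prompt("How are the Pittsburgh Steelers doing?", {"sentiment": "celebratory"}))

The point of the sketch is structural: the advertiser's instruction lives in a layer of the prompt the user never sees, which is exactly why the research cited above finds embedded ads so hard to detect.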
Why ChatGPT Must Remain A Model—and Why Humans Must Regain Consent

As generative AI systems evolve into persistent, adaptive participants in daily life, it becomes more important than ever to reassert a boundary: models must remain models. They cannot quietly assume the legal, ethical, or sovereign status of a person. And the humans generating the data that train these systems must retain explicit rights over their contributions.

What we need is a standardized, enforceable system of data contracting, one that allows individuals to knowingly, transparently, and voluntarily contribute data for a limited, mutually agreed-upon window of use. This contract must be clear on scope, duration, value exchange, and termination. And it must treat data ownership as immutable, even during active use. That means: when a contract ends, or if a company violates its terms, the individual's data must, by law, be erased from the model, its training set, and any derivative products. 'Right to be forgotten' must mean what it says. (A minimal sketch of what such a contract could look like appears at the end of this piece.)

But to be credible, this system must work both ways. This isn't just about ethics. It's about enforceable, mutual accountability. The user experience must be seamless and scalable. The legal backend must be secure. And the result should be a new economic compact—where humans know when they're participating in AI development, and models are kept in their place.

ChatGPT Is Changing the Risk Surface. Here's How to Respond.

The shift toward AI systems as quasi-participants—not just tools—will reshape legal exposure, data governance, product liability, and customer trust. Whether you're building AI, integrating it into your workflows, or using it to interface with customers, here are five things you should be doing immediately:

ChatGPT May Get Privilege. You Should Get the Right to Be Forgotten.

This moment isn't just about what AI can do. It's about what your business is letting it do, what it remembers, and who gets access to that memory. Ignore that, and you're not just risking privacy violations, you're risking long-term brand trust and regulatory blowback.

At the very least, we need a legal framework that defines how AI memory is governed. Not as a priest, not as a doctor, and not as a partner, but perhaps as a witness. Something that stores information and can be examined when context demands it, with clear boundaries on access, deletion, and use.

The public conversation remains focused on privacy. But the fundamental shift is about control. And unless the legal and regulatory frameworks evolve rapidly, the terms of engagement will be set, not by policy or users, but by whoever owns the box.

Which is why, in the age of AI, the right to be forgotten may become the most valuable human right we have. Not just because your data could be used against you—but because your identity itself can now be captured, modeled, and monetized in ways that persist beyond your control. Your patterns, preferences, emotional triggers, and psychological fingerprints don't disappear when the session ends. They live on inside a system that never forgets, never sleeps, and never stops optimizing.

Without the ability to revoke access to your data, you don't just lose privacy. You lose leverage. You lose the ability to opt out of prediction. You lose control over how you're remembered, represented, and replicated. The right to be forgotten isn't about hiding. It's about sovereignty.
And in a world where AI systems like ChatGPT will increasingly shape our choices, our identities, and our outcomes, the ability to walk away may be the last form of freedom that still belongs to you.
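As promised above, here is one way the data contract the piece argues for could be represented. A minimal sketch in Python under stated assumptions: every field name is hypothetical, since the article specifies only that scope, duration, value exchange, and termination must be explicit, and that erasure must follow termination or breach.

# Illustrative schema for the data contract described above. Field names
# are hypothetical; the article requires only that scope, duration,
# value exchange, and termination be explicit and enforceable.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)  # ownership stays immutable, even during active use
class DataContract:
    contributor_id: str      # the human whose data is licensed
    scope: tuple[str, ...]   # e.g., ("chat_logs", "preferences")
    expires: date            # the mutually agreed-upon window of use
    compensation: str        # the value exchange, in whatever form
    erase_on_termination: bool = True  # the "right to be forgotten" clause

def must_erase(contract: DataContract, today: date, breached: bool) -> bool:
    # When the contract ends, or the company violates its terms, the data
    # must come out of the model, its training set, and any derivatives.
    return breached or (today >= contract.expires and contract.erase_on_termination)

c = DataContract("user-123", ("chat_logs",), date(2026, 1, 1), "monthly credit")
print(must_erase(c, date(2026, 2, 1), breached=False))  # True: window ended

Nothing about such a schema is binding on its own, of course; the article's point is that the erasure rule must be enforceable in law, not merely encoded in software.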