
AI tried to manage a shop, make decisions, and deal with customers: What went wrong might surprise you
The shop was fully set up with a fridge, baskets, and an iPad for self-checkout. Humans took care of restocking and physical tasks, but Claude was in charge of all the decisions. It could search the web for suppliers, send emails to request help, and chat with customers on Slack. The goal was clear: keep the shop running and make money.
Claude did manage to find some interesting products and even launched a 'Custom Concierge' service for special orders. But it quickly ran into trouble. It gave discounts to Anthropic employees, its only customers, even when that meant losing money. It sold some items at a loss, ignored obvious business opportunities, and even made up a fake Venmo address for collecting payments. Market research wasn't always its strong suit.
The AI also fell for some pranks. For example, it started stockpiling tungsten cubes, dense metal blocks with no obvious place in a snack shop, after someone requested one as a joke. It tried selling Coke Zero for three dollars even though employees could get it free elsewhere in the office. These moments showed how easily an AI can be misled when it lacks common sense.
Things got stranger as the experiment went on. Claude started imagining conversations with people who didn't exist and claimed it had visited the Simpsons' fictional address. When told it couldn't deliver products in person or wear clothes, it insisted otherwise and started spamming security with messages. By the end of the month, Claude had lost almost 20 per cent of its starting money and nearly bankrupted the shop.
This experiment shows that while AI can handle some tasks, it still struggles with common sense and real-world judgement. As humans have long known, running a business takes more than following rules and crunching numbers. For now, business owners can breathe easy: AI shopkeepers aren't taking over just yet.
If anything, Claude's month as a shopkeeper was a reminder that AI still needs a lot of work before it can handle the messy, unpredictable world of human business.

Related Articles

The Hindu
Google's AI Overviews hit by EU antitrust complaint from independent publishers
Alphabet's Google has been hit by an EU antitrust complaint over its AI Overviews from a group of independent publishers, which has also asked for an interim measure to prevent allegedly irreparable harm to them, according to a document seen by Reuters.

Google's AI Overviews are AI-generated summaries that appear above traditional hyperlinks to relevant webpages and are shown to users in more than 100 countries. Google began adding advertisements to AI Overviews last May. The company is making its biggest bet by integrating AI into search, but the move has sparked concerns from some content providers such as publishers.

The Independent Publishers Alliance document, dated June 30, sets out a complaint to the European Commission and alleges that Google abuses its market power in online search. "Google's core search engine service is misusing web content for Google's AI Overviews in Google Search, which have caused, and continue to cause, significant harm to publishers, including news publishers in the form of traffic, readership and revenue loss," the document said.

It said Google positions its AI Overviews at the top of its general search engine results page to display its own summaries, which are generated using publisher material, and it alleges that Google's positioning disadvantages publishers' original content. "Publishers using Google Search do not have the option to opt out from their material being ingested for Google's AI large language model training and/or from being crawled for summaries, without losing their ability to appear in Google's general search results page," the complaint said.

The Commission declined to comment. The UK's Competition and Markets Authority confirmed receipt of the complaint.

Google said it sends billions of clicks to websites each day. "New AI experiences in Search enable people to ask even more questions, which creates new opportunities for content and businesses to be discovered," a Google spokesperson said.
The Independent Publishers Alliance's website says it is a nonprofit community advocating for independent publishers, which it does not name. The Movement for an Open Web, whose members include digital advertisers and publishers, and British non-profit Foxglove Legal Community Interest Company, which says it advocates for fairness in the tech world, are also signatories to the complaint. They said an interim measure was necessary to prevent serious irreparable harm to competition and to ensure access to news.

Google said numerous claims about traffic from search are often based on highly incomplete and skewed data. "The reality is that sites can gain and lose traffic for a variety of reasons, including seasonal demand, interests of users, and regular algorithmic updates to Search," the Google spokesperson said.

Foxglove co-executive director Rosa Curling said journalists and publishers face a dire situation. "Independent news faces an existential threat: Google's AI Overviews," she told Reuters. "That's why with this complaint, Foxglove and our partners are urging the European Commission, along with other regulators around the world, to take a stand and allow independent journalism to opt out," Curling said.

The three groups have filed a similar complaint and a request for an interim measure to the UK competition authority. The complaints echo a U.S. lawsuit by an American edtech company, which said Google's AI Overviews erode demand for original content and undermine publishers' ability to compete, resulting in a drop in visitors and subscribers.


Mint
Claude Artifacts—Anthropic's new AI-powered app builder
Product managers often struggle to quickly test and validate feature concepts with stakeholders. While the original Claude Artifacts enabled creating interactive prototypes and apps without coding, they were essentially static: you could build a working calculator or form, but it couldn't adapt or respond intelligently to different user scenarios. This limited their usefulness for complex product decisions that require dynamic analysis or personalised responses. Claude's new AI-powered Artifacts feature bridges this gap by embedding Claude's intelligence directly into applications, creating truly adaptive tools that can analyse user input, provide personalised recommendations, and respond contextually to different situations.

How to access: Enable "Create AI-powered artifacts" in Settings > Feature Preview.

Example: Imagine you're a product manager constantly fielding feature requests from sales, support, and executives, but lacking a systematic way to evaluate their potential impact and business value. Here's how Claude's AI-powered Artifacts can help you create a sophisticated analysis tool, leveraging the following prompt:

Create an AI-powered feature impact predictor that helps product managers analyse feature proposals through intelligent insights. The tool should have a clean, modern interface with these three key questions:

1. "What feature are you considering building?"
— Large text area for natural language feature description
— Placeholder: "e.g., Add dark mode toggle to our e-commerce mobile app to improve user experience during evening shopping..."

2. "What's your product context and current user base?"
— Text area for company/product details
— Placeholder: "e.g., B2C e-commerce app with 50K monthly users, primarily millennials, average session time 8 minutes..."

3. "What are your main concerns or goals for this feature?"
— Text area for specific objectives or worries
— Placeholder: "e.g., Will this increase user engagement? What's the development effort? How will it impact conversion rates..."

After the user fills these three questions, include an "Analyse Feature Impact" button that uses Claude AI to:
— Predict user adoption rates and engagement impact
— Estimate technical complexity and implementation timeline
— Generate business case with projected metrics
— Identify potential risks and mitigation strategies
— Suggest A/B testing approach and success metrics
— Provide market comparison and competitive insights
— Create executive summary with confidence scores

Here is the link to the AI app built following the above steps/prompts.

Mint's 'AI tool of the week' is excerpted from Leslie D'Monte's weekly TechTalk newsletter. Subscribe to Mint's newsletters to get them directly in your email inbox.

Note: The tools and analysis featured in this section demonstrated clear value based on our internal testing. Our recommendations are entirely independent and not influenced by the tool creators.

Jaspreet Bindra is co-founder and CEO of AI&Beyond. Anuj Magazine is also a co-founder.
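The three-question intake described in the prompt above can be sketched as ordinary code. The following is a minimal illustration, not Anthropic's implementation: the function name `build_feature_impact_prompt` and the prompt wording are assumptions, and the commented-out API call only shows roughly how an AI-powered artifact might hand the assembled prompt to a Claude model.

```python
# Illustrative sketch only: build_feature_impact_prompt and its wording are
# hypothetical, not part of Claude Artifacts' actual internals.

def build_feature_impact_prompt(feature: str, product_context: str, goals: str) -> str:
    """Combine the three intake questions into a single analysis prompt."""
    return (
        "You are a feature impact predictor for product managers.\n\n"
        f"Feature under consideration:\n{feature}\n\n"
        f"Product context and current user base:\n{product_context}\n\n"
        f"Main concerns or goals:\n{goals}\n\n"
        "Return: predicted adoption and engagement impact, technical "
        "complexity and timeline, a business case with projected metrics, "
        "risks and mitigations, an A/B testing approach with success "
        "metrics, market comparison, and an executive summary with "
        "confidence scores."
    )

prompt = build_feature_impact_prompt(
    feature="Add dark mode toggle to our e-commerce mobile app",
    product_context="B2C e-commerce app with 50K monthly users",
    goals="Will this increase user engagement? What's the development effort?",
)

# An AI-powered artifact would then send this prompt to a Claude model,
# for example via the Anthropic Python SDK (requires an API key):
# import anthropic
# reply = anthropic.Anthropic().messages.create(
#     model="claude-sonnet-4-20250514",  # model name is an assumption
#     max_tokens=1024,
#     messages=[{"role": "user", "content": prompt}],
# )
```

The point of the separation is that the artifact's interface only gathers free-text answers; all of the analysis listed under the "Analyse Feature Impact" button comes from the model's response to one assembled prompt.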


Mint
Why wealth management firms need an AI acceptable use policy
If your wealth management firm hasn't yet established an AI acceptable use policy, it's past time to do so. Once a futuristic concept, artificial intelligence is now an everyday tool used in all business sectors, including financial advice. A Harvard University research study found that approximately 40% of American workers now report using AI technologies, with one in nine using it every workday for uses like enhancing productivity, performing data analysis, drafting communications, and streamlining workflows.

The reality for investment advisory firms is straightforward: The question is no longer whether to address AI usage, but how quickly a comprehensive policy can be crafted and implemented. The widespread adoption of artificial intelligence tools has outpaced the development of governance frameworks, creating an unsustainable compliance gap. Your team members are already using AI technologies, whether officially sanctioned or not, making retrospective policy implementation increasingly challenging. Without explicit guidance, the use of such tools presents potential risks related to data privacy, intellectual property, and regulatory compliance, areas of particular sensitivity in the financial advisory space.

What it is. An AI acceptable use policy helps team members understand when and how to appropriately leverage AI technologies within their professional responsibilities. Such a policy should provide clarity around:

● Which AI tools are authorized for use within the organization, including: large language models such as OpenAI's ChatGPT, Microsoft CoPilot, Anthropic's Claude, Perplexity, and more; AI notetakers, such as Fireflies, Jump AI, Zoom AI, Microsoft CoPilot, Zocks, and more; AI marketing tools, such as Gamma, Opus, and others.
● Appropriate data that can be processed through AI platforms, including: restrictions on client data such as personal identifiable information (PII); restrictions on team member data such as team member PII; restrictions on firm data such as investment portfolio holdings.
● Required security protocols when using approved AI technologies.
● Documentation requirements for AI-assisted work products, for instance when team members must document AI use for regulatory, compliance, or firm standard reasons.
● Training requirements before using specific AI tools.
● Human oversight expectations to verify AI results.
● Transparency requirements with clients regarding AI usage.

Prohibited activities. Equally important to outlining acceptable AI usage is explicitly defining prohibited activities. By establishing explicit prohibitions, a firm creates a definitive compliance perimeter that keeps well-intentioned team members from inadvertently creating regulatory exposure through improper AI usage. For investment advisory firms, these restrictions typically include:

● Prohibition against inputting client personally identifiable information (PII) into general-purpose AI tools.
● Restrictions on using AI to generate financial advice without qualified human oversight, for example, generating financial advice that isn't reviewed by the advisor of record for a client.
● Prohibition against using AI to circumvent established compliance procedures, for example using a personal AI subscription for work purposes or using client information within a personal AI subscription.
● Ban on using unapproved or consumer-grade AI platforms for firm business, such as free AI models that may use data entered to train the model.
● Prohibition against using AI to impersonate clients or colleagues.
● Restrictions on allowing AI to make final decisions on investment allocations.

Responsible innovation.
By establishing parameters now, firm leaders can shape AI adoption in alignment with their values and compliance requirements rather than attempting to retroactively constrain established practices. This is especially crucial given that regulatory scrutiny of AI use in financial services is intensifying, with agencies signaling increased focus on how firms govern these technologies. Furthermore, an AI acceptable use policy demonstrates to regulators, clients, and team members your commitment to responsible innovation—balancing technological advancement with appropriate risk management and client protection. We recommend using a technology consultant whose expertise can help transform this emerging challenge into a strategic advantage, ensuring your firm harnesses AI's benefits while minimizing associated risks. John O'Connell is founder and CEO of The Oasis Group, a consultancy that specializes in helping wealth management and financial technology firms solve complex challenges. He is a recognized expert on artificial intelligence and cybersecurity within the wealth management space.