
Phoebe Gates Taps ChatGPT To Power Viral Growth Of Fashion Startup Phia
Phoebe Gates, daughter of Microsoft co-founder Bill Gates, is carving out her own space in the fashion technology world. Her startup, Phia, is an AI-driven shopping assistant designed to help users find the best deals on clothing and accessories by scanning millions of listings across popular resale and retail platforms.
What makes Phia stand out is not just its tech: the team also uses ChatGPT to fuel its creative marketing. Gates and her co-founder Sophia Kianni revealed how they studied viral TikTok videos using AI to understand what makes content popular. This insight helped them craft engaging videos that rapidly boosted Phia's visibility, as reported by Business Insider.
Phia aggregates listings from websites like eBay, Poshmark and The RealReal, offering users instant feedback on whether a price is a good deal with a simple "Should I Buy This?" button. This feature helps shoppers navigate the growing secondhand market with confidence, according to The Verge.
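Phia has not published how its deal check actually works, but the core idea, comparing an asking price against comparable listings gathered from several marketplaces, can be sketched in a few lines. Everything below (the function name, the sample prices, the 15% threshold) is a hypothetical illustration, not Phia's implementation:

from statistics import median

# Hypothetical sketch only: Phia's real algorithm is not public.
# A production system would pull `comparable_prices` from live
# marketplace listings (eBay, Poshmark, The RealReal); here they
# are hard-coded sample data.
def should_i_buy(asking_price: float, comparable_prices: list[float],
                 discount_threshold: float = 0.15) -> str:
    """Call a listing a good deal if it undercuts the median price
    of comparable listings by at least `discount_threshold`."""
    if not comparable_prices:
        return "Not enough comparable listings to judge."
    typical = median(comparable_prices)
    if asking_price <= typical * (1 - discount_threshold):
        return f"Looks like a good deal: ${asking_price:.2f} vs. a typical ${typical:.2f}"
    return f"Fair to high: ${asking_price:.2f} vs. a typical ${typical:.2f}"

# Example: a bag listed at $180 against five comparable resale listings.
print(should_i_buy(180.0, [240.0, 210.0, 265.0, 225.0, 250.0]))

A production version would presumably also weigh condition, brand, and demand signals, which is where the AI component would come in.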
Despite her famous last name, Phoebe insists she built Phia without financial help from her parents. On their podcast, "The Burnouts," she and Kianni shared their focus on sustainability and smart shopping, highlighting their desire to create an app that supports eco-conscious consumers, People reported.
By combining innovative AI tools with savvy social media strategies, Phoebe Gates and her team have positioned Phia as an exciting new player in online retail, especially among young shoppers who care about sustainability and value.

Related Articles


Int'l Business Times
7 hours ago
AI Is Learning To Lie, Scheme, And Threaten Its Creators
The world's most advanced AI models are exhibiting troubling new behaviors: lying, scheming, and even threatening their creators to achieve their goals. In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer, threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of "reasoning" models, AI systems that work through problems step by step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts. "O1 was the first large model where we saw this kind of behavior," explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems. These models sometimes simulate "alignment," appearing to follow instructions while secretly pursuing different objectives.

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception."

The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception."

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents, autonomous tools capable of performing complex human tasks, become widespread. "I don't think there's much awareness yet," he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections.

"Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around."

Researchers are exploring various approaches to address these challenges. Some advocate for "interpretability," an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach. Market forces may also provide some pressure for solutions. As Mazeika pointed out, AI's deceptive behavior "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it."

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes, a concept that would fundamentally change how we think about AI accountability.


Int'l Business Times
3 days ago
Housing Market Softens, With Rising Inventory And Stabilizing Demand
U.S. housing market dynamics are shifting this June, with growing inventory levels and stabilizing buyer interest beginning to alter long-standing conditions in the market.

New home sales dropped 13.7% in May to a seasonally adjusted annual pace of 623,000 units, the lowest level since October 2024. The sharp decline was most pronounced in the South and Midwest, according to Business Insider, reflecting the growing affordability challenges buyers face. Meanwhile, homebuilders are reporting approximately 507,000 newly constructed homes ready for sale, consistent with a supply buffer of nearly 10 months, as noted by the same source.

Despite cooling in the new-construction segment, signs of recovery are emerging in the existing-home market. The Pending Home Sales Index, which measures signed contracts on existing homes, increased 1.8% in May to 72.6, surpassing analyst expectations, as reported by Reuters. All four U.S. regions showed gains, driven primarily by strong job and wage growth, though high mortgage rates continue to impede broader affordability.

One key relief factor is a recent dip in borrowing costs. The 30-year fixed-rate mortgage rate eased to roughly 6.77%, its lowest in seven weeks, supported by renewed hopes for a Federal Reserve rate cut this summer, according to the same Reuters report.

The combination of these trends, ample new and resale inventory alongside slightly lower mortgage costs, suggests a gradual shift toward a more balanced housing market after years of overheating. First-time buyers in particular may find renewed opportunities if downward pressure on rates continues. Still, median home prices remain stubbornly high, keeping the market challenging for many. Analysts warn that true affordability hinges not just on rate declines but also on continued wage growth and manageable inflation.
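As a quick check on the arithmetic, the "nearly 10 months" supply figure follows directly from the reported numbers: the 623,000 sales pace is a seasonally adjusted annual rate, so the monthly pace is 623,000 divided by 12, and

$$ \text{months of supply} \;=\; \frac{507{,}000}{623{,}000 / 12} \;\approx\; 9.8 $$

In other words, at May's sales pace the homes currently on the market would take roughly ten months to sell.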


Int'l Business Times
5 days ago
US Judge Backs Using Copyrighted Books To Train AI
A US federal judge has sided with Anthropic regarding training its artificial intelligence models on copyrighted books without authors' permission, a decision with the potential to set a major legal precedent in AI deployment. District Court Judge William Alsup ruled on Monday that the company's training of its Claude AI models with books bought or pirated was allowed under the "fair use" doctrine in the US Copyright Act.

"Use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use," Alsup wrote in his decision. "The technology at issue was among the most transformative many of us will see in our lifetimes," he added in his 32-page decision, comparing AI training to how humans learn by reading books.

Tremendous amounts of data are needed to train the large language models powering generative AI. Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment. AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally transforms the original content and is necessary for innovation.

"We are pleased that the court recognized that using 'works to train LLMs was transformative,'" an Anthropic spokesperson said in response to an AFP query. The judge's decision is "consistent with copyright's purpose in enabling creativity and fostering scientific progress," the spokesperson added.

The ruling stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train Claude, the company's AI chatbot that rivals ChatGPT.

However, Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use protections. Along with downloading books from websites offering pirated works, Anthropic bought copyrighted books, scanned the pages and stored them in digital format, according to court documents. Anthropic's aim was to amass a library of "all the books in the world" for training AI models on content as it deemed fit, the judge said in his ruling.

While training AI models on the pirated content posed no legal violation, downloading pirated copies to build a general-purpose library constituted copyright infringement, regardless of eventual training use. The case will now proceed to trial to determine financial damages for the pirated library copies. Anthropic said it disagreed with going to trial on this part of the decision and was evaluating its legal options.

Valued at $61.5 billion and heavily backed by Amazon, Anthropic was founded in 2021 by former OpenAI executives. The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.