
Who is Mira Murati, the AI expert who rejected Mark Zuckerberg's ₹83,00,00,00,000 Meta offer to build the world's most powerful AI?
Still, Murati's account stands. She says Meta tried to acquire Thinking Machines, just as it had attempted with Scale AI before. The attempt failed.

Born in 1988 in Vlorë, Albania, Murati had an early life shaped by a changing political landscape. At 16, she won a scholarship to Pearson College UWC in British Columbia, Canada, a school known for promoting global citizenship and critical thinking. That experience laid the foundation for her future. After finishing the International Baccalaureate programme, she pursued a dual academic path: a Bachelor of Arts at Colby College in 2011 and a Bachelor of Engineering from Dartmouth's Thayer School in 2012. That blend of liberal arts and engineering proved essential in shaping how she thinks about technology, people and the world.

Murati started her career at Zodiac Aerospace, then moved to Tesla, where she worked on the Model X as a senior product manager. From there, she joined Leap Motion (now Ultraleap), where she explored gesture-based computing and augmented reality.
But her defining chapter began in 2018, when she joined OpenAI as Vice President of Applied AI and Partnerships. By 2022, she was promoted to Chief Technology Officer. Under her leadership, OpenAI launched groundbreaking tools like ChatGPT, DALL·E, Codex and Sora. In November 2023, during a leadership crisis at OpenAI, Murati briefly stepped in as interim CEO after Sam Altman was removed; she was one of the senior leaders who questioned his management. Although Altman returned within days, Murati eventually left the company in September 2024 to start her own venture.

Founded in February 2025, Thinking Machines Lab is an AI public benefit startup that aims to build general-purpose, accessible and ethical AI systems. It raised $2 billion in seed funding by July — the largest seed round in tech history — at a $12 billion valuation.
The startup is backed by names like Nvidia, AMD, Accel, Cisco, ServiceNow and the Albanian government. The company has hired talent from OpenAI, Meta and French AI firm Mistral. Although it hasn't released a product yet, industry insiders are watching closely. A source told Wired the company is developing AI systems that could help tackle major global problems such as disease, climate change and inequality. And unlike many other players, Murati's approach includes bringing experts from outside the AI world — scientists, policymakers and researchers — into the conversation early.

What makes this story even more unusual is that, in the high-stakes world of AI, talent often follows money. But not in Murati's case. As per India.com, the team values independence and sees the mission of Thinking Machines Lab as larger than any corporate payout. They want to build AI from the ground up, in a way that reflects their values, not someone else's roadmap.

Murati's influence hasn't gone unnoticed. She was named in Time's 100 Most Influential People in AI in 2024 and in Fortune's Most Powerful Women in Business in 2023.
Her company's funding success is also rare in a sector where female-led startups still receive only a fraction of venture capital. According to Female Founders Fund, just 2.1% of VC money last year went to startups founded solely by women. And yet Murati secured $2 billion. Her startup has no product, but it has a clear vision, an elite team, and the trust of heavyweight investors.

This isn't just about turning down money. It's about who gets to shape the future of AI. Murati has made it clear that she's not just building tools; she's building a framework for how AI should work — inclusive, transparent and accountable. In her own story, there's a message to young technologists, especially women: you don't need to follow the usual path to lead the next revolution.

So when Zuckerberg came knocking with ₹8,300 crore, she had an answer ready. No. Because some visions aren't for sale.

Related Articles


Indian Express
ChatGPT chats were showing up on Google, but OpenAI says it's all good now
ChatGPT users may have unintentionally shared their private conversations with millions of others, as Google and other search engines were recently found indexing chats that had been shared with others. From explanations of astrophysics to requests for mental health advice, conversations with ChatGPT were showing up on Google if you used the 'site:' search operator in your query. While most of these conversations were mundane and uninteresting, some were deeply personal. For example, several chats discussing issues like mental health, sex life, career advice, addiction, physical abuse and other serious topics could be surfaced with a simple Google search.

While ChatGPT does not make conversations public by default, chats that were shared using the 'Share' button in the app and on the website were indexed by search engines like Google. For those unaware, ChatGPT's share feature creates a link that you can send to others over WhatsApp, Instagram, or Facebook so they can view your conversation with the AI chatbot. The feature also allows people who have the link to continue the conversation with ChatGPT if they want to. One thing to note is that even if you delete the link, it might still be visible on cached pages.

Fast Company, which first discovered the issue, says that around 4,500 conversations were visible via a Google site search, though most of them did not contain personally identifiable information. However, if someone included a name or phrase in a chat, it may show up in search results. The publication goes on to say that the number of leaked chats is probably much higher than the stated figure, as Google may not index all conversations. In a statement to PCMag, a Google spokesperson said that OpenAI is responsible for making these chats visible to search engines.

'We just removed a feature from @ChatGPTapp that allowed users to make their conversations discoverable by search engines, such as Google. This was a short-lived experiment to help people discover useful conversations. This feature required users to opt-in, first by picking a chat…' — DANΞ (@cryps1s), July 31, 2025

In that post on X, an OpenAI employee said that the company has now removed the feature that allowed users to make their conversations discoverable by search engines. He went on to say that this 'was a short-lived experiment' and that it 'introduced too many opportunities for folks to accidentally share things they didn't intend to.'

OpenAI's Shared Links FAQ states that ChatGPT chats are not made public unless you tick the 'Make this chat discoverable' option when using the AI chatbot's built-in share feature. As it turns out, many ChatGPT users may have accidentally ticked this box, thinking it necessary for their conversation to be visible to others. In case you are wondering, these shared links can be deleted at any time. To do so, head over to ChatGPT settings and, under the 'Data controls' section, click the 'Manage' button to the right of the 'Shared links' option. There, you will see how many chats you have shared publicly, along with the option to delete these links.

OpenAI CEO Sam Altman has also said that since there is currently no legal or policy framework for AI, users should not expect legal confidentiality for their conversations with ChatGPT. This means that none of your chats are actually private, and OpenAI might be forced to share your conversations with the AI chatbot in court if asked to do so.


Time of India
Think before you ask: Why ChatGPT legal queries can be used against you as court evidence
ChatGPT may be quick and convenient, but using it for legal questions could backfire in serious ways. Many users aren't aware that anything they type into the chatbot, even deleted messages, can be retained and used as evidence in legal proceedings. Unlike lawyers, AI tools are not bound by confidentiality or ethical obligations. This means that sharing sensitive legal concerns with a chatbot doesn't just yield unreliable advice; it may also create a discoverable digital trail. Before you confide in AI, it is important to understand the risks and why human legal counsel is still essential.

Your ChatGPT conversations are not legally confidential

In a recent appearance on the This Past Weekend podcast hosted by comedian Theo Von, OpenAI CEO Sam Altman made a candid admission: conversations with ChatGPT are not protected under any kind of legal privilege. "Right now, if you talk to a therapist or a lawyer or a doctor, there's legal privilege for it," Altman explained. "There's doctor-patient confidentiality, there's legal confidentiality. And we haven't figured that out yet for when you talk to ChatGPT." This means that if you type out a sensitive legal scenario, say, describing an incident that might amount to a crime or seeking strategic legal advice, that chat can potentially be disclosed in court. According to Altman, OpenAI could be legally compelled to hand over your conversations, even if they've been deleted.

The consequences of this are serious. Legal experts like Jessee Bundy from Creative Counsel have warned users not to mistake AI for actual legal representation. "If you're pasting in contracts, asking legal questions, or asking [the chatbot] for strategy, you're not getting legal advice," Jessee E. Bundy posted on X (formerly Twitter). "You're generating discoverable evidence. No attorney-client privilege. No confidentiality. No ethical duty. No one to protect you." She added that ChatGPT may feel private and helpful, but unlike a licensed attorney, it has no legal obligation to act in your best interest, and it can't be held accountable for any incorrect advice it generates.

AI-generated legal advice isn't actually legal advice

When Malte Landwehr, CEO of an AI company, suggested that ChatGPT could still provide useful legal input even if it's not confidential, Bundy strongly pushed back. "ChatGPT can't give you legal advice," she replied. "Legal advice comes from a licensed professional who understands your specific facts, goals, risks, and jurisdiction. And is accountable for it. ChatGPT is a language model. It generates words that sound right based on patterns, but it doesn't know your situation, and it's not responsible if it's wrong." Calling it "legal Mad Libs," Bundy stressed that relying on ChatGPT for legal issues is both risky and potentially self-incriminating.

Deleted chats with AI aren't safe from legal scrutiny

User conversations with AI chatbots, including those that have been deleted, may still be stored and subject to disclosure in legal proceedings. As highlighted by ongoing litigation, some companies are required to retain chat records, which could be subpoenaed in court. This includes potentially sensitive or personal exchanges. At present, there is no legal obligation for AI platforms to treat user chats as confidential in the way communications with a lawyer or therapist are protected. Until laws are updated to account for AI interactions, users should be aware that anything typed into a chatbot could, in some cases, be used as evidence.

Why it's best to speak with a human lawyer instead of ChatGPT

For legal concerns, whether about a contract, a criminal matter, or a rights dispute, it's essential to consult a licensed professional. Unlike AI, lawyers are bound by strict confidentiality, legal privilege, and ethical duties.
AI-generated responses may feel private and helpful, but they are not protected, verified, or accountable. While it may be tempting to turn to AI for convenience, doing so for legal issues could expose you to unnecessary risk. As artificial intelligence becomes more common in everyday use, it's important to recognise its limitations, especially in areas involving legal or personal stakes. Conversations with AI are not protected under legal privilege, and in the eyes of the law, they can be accessed like any other form of communication. Until privacy and legal frameworks are in place for AI, it's safest to avoid using chatbots for legal questions. For advice you can trust and that will remain confidential, always consult a qualified legal professional.


Business Standard
Wall Street Slips Despite Strong Tech Earnings as Trade Tensions and Sector Weakness Weigh on Market
U.S. indices closed lower after initial gains driven by Meta and Microsoft earnings faded. Trade uncertainty, weakness in semiconductor and pharmaceutical stocks, and declines in global markets dampened investor sentiment. The Nasdaq edged down 7.23 points, or less than a tenth of a percent, to 21,122.45; the S&P 500 fell 23.51 points, or 0.4 percent, to 6,339.39; and the Dow slid 330.30 points, or 0.7 percent, to 44,140.98.

Wall Street opened strong following upbeat earnings from tech giants Meta Platforms and Microsoft. Meta surged 11.3 percent after surpassing Q2 expectations and offering positive Q3 guidance, while Microsoft rose 4 percent on better-than-expected fiscal Q4 results. However, early gains faded as profit-taking kicked in after the Nasdaq and S&P 500 hit record highs.

Market sentiment was also influenced by ongoing trade tensions, with President Trump imposing a 15 percent tariff on South Korean goods. He also extended tariffs on Mexican imports and increased duties on cars, steel, aluminum, and copper. Treasury Secretary Bessent remained optimistic about a potential U.S.-China deal. Separately, the Commerce Department released a report showing that U.S. consumer prices increased in line with economist estimates in June.

Semiconductor stocks were under substantial selling pressure, with the Philadelphia Semiconductor Index plunging 3.1 percent after ending Wednesday's trading at its best closing level in a year. Qualcomm (QCOM) helped lead the sector lower, plummeting 7.7 percent despite reporting better-than-expected fiscal third-quarter earnings. Pharmaceutical stocks were also considerably weak, dragging the NYSE Arca Pharmaceutical Index down 2.9 percent to a two-month closing low. Healthcare, oil service and steel stocks also moved significantly to the downside as the day progressed, while notable strength remained visible among software and computer hardware stocks.

Asia-Pacific stocks moved mostly lower. Hong Kong's Hang Seng Index plunged 1.6 percent and China's Shanghai Composite Index slumped 1.2 percent, although Japan's Nikkei 225 Index bucked the downtrend and jumped 1 percent. The major European markets all moved to the downside on the day: the French CAC 40 Index tumbled 1.1 percent, the German DAX Index slid 0.8 percent, and the U.K.'s FTSE 100 Index edged down 0.1 percent.

In the bond market, Treasuries showed a modest move back to the upside. The yield on the benchmark ten-year note, which moves opposite to its price, dipped 1.6 basis points to 4.36 percent.