Facebook parent Meta divides its AI teams, chief product officer explains change in internal memo: 'Our new structure aims to...'
Time of India | 28-05-2025
Meta is reportedly reorganising its artificial intelligence (AI) teams to expedite the development and deployment of new products and features. This internal restructuring, announced in a memo by chief product officer Chris Cox, aims to enhance the company's competitiveness in the evolving AI space, where it faces significant rivalry from the likes of ChatGPT-maker OpenAI, Google and Microsoft.
According to a report by Axios, Cox detailed the new organizational framework in an internal memo on Tuesday (May 27). As per the memo, there will be two distinct units: an AI Products team and an AGI Foundations unit. While Connor Hayes will lead the AI Products team, the AGI Foundations unit will be co-led by Ahmad Al-Dahle and Amir Frenkel.
"Our new structure aims to give each org more ownership while minimizing (but making explicit) team dependencies," Cox stated.
This mirrors a previous AI team reshuffle conducted by Meta in 2023, which also aimed to expedite development.
How new AI teams at Meta will work
The AI Products team at Meta will be responsible for the Meta AI assistant, Meta's AI Studio, and the integration of AI features across core Meta platforms including Facebook, Instagram and WhatsApp.
The AGI Foundations unit will work on a range of underlying technologies, including the company's Llama models, alongside initiatives to enhance AI capabilities in reasoning, multimedia and voice.
Meta's AI research unit, known as FAIR (Fundamental AI Research), will reportedly maintain its independent status outside this new structure. However, a specific team within FAIR focusing on multimedia will transition to the new AGI Foundations team.
Company executives confirmed to the publication that no departures or job cuts are associated with these changes. Some leaders from other divisions of Meta have been integrated into the new AI structure.

Related Articles

Talking to ChatGPT? Think twice: Sam Altman says OpenAI has no legal rights to protect 'sensitive' personal info

Mint | 2 hours ago

During an interaction with podcaster Theo Von, OpenAI CEO Sam Altman spoke about confidentiality related to ChatGPT. According to Altman, many people, especially youngsters, talk to ChatGPT about very personal issues, as they would with a therapist or life coach. They ask for help with relationships and life choices. However, that can be tricky. 'Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality,' Altman says. No such legal privacy currently exists for ChatGPT. If there's a court case, OpenAI might have to share 'your most sensitive' chats.

Altman feels this is wrong. He believes conversations with AI should have the same privacy as talks with a therapist. A year ago, no one thought about this. Now, it's a big legal question. 'We should have the same concept of privacy for your conversations with AI that we do with a therapist,' he says. 'No one had to think about that even a year ago,' the OpenAI CEO adds.

Von then says he feels unsure about using AI because he worries about who might see his personal information. He thinks things are moving too fast without proper checks. Altman agrees. He believes the privacy issue needs urgent attention. Lawmakers also agree, but it's all very new and laws haven't caught up yet, he said. Von doesn't 'talk to' ChatGPT much himself because there's no legal clarity about privacy. 'I think it makes sense,' Altman replies.

ChatGPT as a therapist

There are numerous reported cases of people using ChatGPT as their therapist. A recent incident involves Aparna Devyal, a YouTuber from Jammu & Kashmir. The social media influencer got emotional after missing a flight, a reaction rooted in years of feeling 'worthless'. She spoke to ChatGPT about being called 'nalayak' at school and struggling with dyslexia. ChatGPT comforted her, saying she kept going despite everything. Aparna felt seen.
According to the AI chatbot, Aparna is not a fool, just human. Forgetting things under stress is normal, the AI assistant said. ChatGPT praised her strength in asking for help and said people like her kept the world grounded. 'I'm proud of you,' ChatGPT said.

"Most Empathetic Voice": Neurodivergent People Find New Support In AI Tools For Social Navigation
"Most Empathetic Voice": Neurodivergent People Find New Support In AI Tools For Social Navigation

NDTV

time2 hours ago

  • NDTV

"Most Empathetic Voice": Neurodivergent People Find New Support In AI Tools For Social Navigation

For Cape Town-based filmmaker Kate D'hotman, connecting with movie audiences comes naturally. Far more daunting is speaking with others. "I've never understood how people [decipher] social cues," the 40-year-old director of horror films says. D'hotman has autism and attention-deficit hyperactivity disorder (ADHD), which can make relating to others exhausting and a challenge. However, since 2022, D'hotman has been a regular user of ChatGPT, the popular AI-powered chatbot from OpenAI, relying on it to overcome communication barriers at work and in her personal life. "I know it's a machine," she says. "But sometimes, honestly, it's the most empathetic voice in my life."

Neurodivergent people - including those with autism, ADHD, dyslexia and other conditions - can experience the world differently from the neurotypical norm. Talking to a colleague, or even texting a friend, can entail misread signals, a misunderstood tone and unintended impressions. AI-powered chatbots have emerged as an unlikely ally, helping people navigate social encounters with real-time guidance. Although this new technology is not without risks - in particular, some worry about over-reliance - many neurodivergent users now see it as a lifeline.

How does it work in practice? For D'hotman, ChatGPT acts as an editor, translator and confidant. Before using the technology, she says communicating in neurotypical spaces was difficult. She recalls how she once sent her boss a bulleted list of ways to improve the company, at their request. But what she took to be a straightforward response was received as overly blunt, and even rude. Now, she regularly runs things by ChatGPT, asking the chatbot to consider the tone and context of her conversations. Sometimes she'll instruct it to take on the role of a psychologist or therapist, asking for help to navigate scenarios as sensitive as a misunderstanding with her best friend.
She once uploaded months of messages between them, prompting the chatbot to help her see what she might have otherwise missed. Unlike humans, D'hotman says, the chatbot is positive and non-judgmental.

That's a feeling other neurodivergent people can relate to. Sarah Rickwood, a senior project manager in the sales training industry, based in Kent, England, has ADHD and autism. Rickwood says she has ideas that run away with her and often loses people in conversations. "I don't do myself justice," she says, noting that ChatGPT has "allowed me to do a lot more with my brain." With its help, she can put together emails and business cases more clearly.

The use of AI-powered tools is surging. A January study conducted by Google and the polling firm Ipsos found that AI usage globally has jumped 48%, with excitement about the technology's practical benefits now exceeding concerns over its potentially adverse effects. In February, OpenAI told Reuters that its weekly active users had surpassed 400 million, of which at least 2 million are paying business users.

But for neurodivergent users, these aren't just tools of convenience, and some AI-powered chatbots are now being created with the neurodivergent community in mind. Michael Daniel, an engineer and entrepreneur based in Newcastle, Australia, told Reuters that it wasn't until his daughter was diagnosed with autism - and he received the same diagnosis himself - that he realised how much he had been masking his own neurodivergent traits. His desire to communicate more clearly with his neurotypical wife and loved ones inspired him to build NeuroTranslator, an AI-powered personal assistant, which he credits with helping him fully understand and process interactions, as well as avoid misunderstandings. "Wow ... that's a unique shirt," he recalls saying about his wife's outfit one day, without realising how his comment might be perceived.
She asked him to run the comment through NeuroTranslator, which helped him recognise that, without a positive affirmation, remarks about a person's appearance could come across as criticism. "The emotional baggage that comes along with those situations would just disappear within minutes," he says of using the app. Since its launch in September, Daniel says NeuroTranslator has attracted more than 200 paid subscribers. An earlier web version of the app, called Autistic Translator, amassed 500 monthly paid subscribers.

As transformative as this technology has become, some warn against becoming too dependent. The ability to get results on demand can be "very seductive," says Larissa Suzuki, a London-based computer scientist and visiting NASA researcher who is herself neurodivergent. Overreliance could be harmful if it inhibits neurodivergent users' ability to function without it, or if the technology itself becomes unreliable - as is already the case with many AI search-engine results, according to a recent study from the Columbia Journalism Review. "If AI starts screwing up things and getting things wrong," Suzuki says, "people might give up on technology, and on themselves."

Baring your soul to an AI chatbot does carry risk, agrees Gianluca Mauro, an AI adviser and co-author of Zero to AI. "The objective [of AI models like ChatGPT] is to satisfy the user," he says, raising questions about its willingness to offer critical advice. Unlike therapists, these tools aren't bound by ethical codes or professional guidelines. If AI has the potential to become addictive, Mauro adds, regulation should follow.

A recent study by Carnegie Mellon and Microsoft (a key investor in OpenAI) suggests that long-term overdependence on generative AI tools can undermine users' critical-thinking skills and leave them ill-equipped to manage without them.
"While AI can improve efficiency," the researchers wrote, "it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI." While Dr. Melanie Katzman, a clinical psychologist and expert in human behaviour, recognises the benefits of AI for neurodivergent people, she does see downsides, such as giving patients an excuse not to engage with others. A therapist will push their patient to try different things outside of their comfort zone. "I think it's harder for your AI companion to push you," she says. But for users who have come to rely on this technology, such fears are academic. "A lot of us just end up kind of retreating from society," warns D'hotman, who says that she barely left the house in the year following her autism diagnosis, feeling overwhelmed. Were she to give up using ChatGPT, she fears she would return to that traumatic period of isolation. "As somebody who's struggled with a disability my whole life," she says, "I need this." (Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

Meta to halt political advertising in EU from October, blames EU rules

Time of India | 3 hours ago

Meta Platforms will end political, electoral and social-issue advertising on its platforms in the European Union in early October, citing legal uncertainties arising from EU rules targeting political advertising, the U.S. social media company said on Friday.

Meta's announcement echoed Alphabet unit Google's decision announced last November, underscoring Big Tech's pushback against EU rules aimed at reining in their power and making them more accountable and transparent. The European Union legislation, called the Transparency and Targeting of Political Advertising (TTPA) regulation, which will apply from Oct. 10, was triggered by concerns about disinformation and foreign interference in elections across the 27-country bloc. The EU law requires Big Tech companies to clearly label political advertising on their platforms, disclose who paid for it and how much, and state which elections are being targeted, or risk fines of up to 6% of their annual turnover.

"From early October 2025, we will no longer allow political, electoral and social issue ads on our platforms in the EU," Meta said in a blog post. "This is a difficult decision - one we've taken in response to the EU's incoming Transparency and Targeting of Political Advertising (TTPA) regulation, which introduces significant operational challenges and legal uncertainties," it said.

Meta said the TTPA obligations create what it called an untenable level of complexity and legal uncertainty for advertisers and platforms operating in the EU, and that the rules will ultimately hurt Europeans. "We believe that personalised ads are critical to a wide range of advertisers, including those engaged on campaigns to inform voters about important social issues that shape public discourse," Meta said.
"Regulations, like the TTPA, significantly undermine our ability to offer these services, not only impacting effectiveness of advertisers' outreach but also the ability of voters to access comprehensive information," the company added.
