
Latest news with #GPT

The biggest AI companies you should know

Yahoo

20 hours ago

  • Business
  • Yahoo

The biggest AI companies you should know

AI continues to be the hottest trend in tech, and it doesn't appear to be going away anytime soon. Microsoft (MSFT), Google (GOOG, GOOGL), Meta (META), and Amazon (AMZN) continue to debut new AI-powered software capabilities while leaders from other AI firms split off to form their own startups. But the furious pace of change also makes it difficult to keep track of the various players in the AI space. With that in mind, we're breaking down what you need to know about the biggest names in AI and what they do. From OpenAI to Perplexity, these are the AI companies you should be following.

Microsoft-backed OpenAI helped put generative AI technology on the map. The company's ChatGPT bot, released in late 2022, quickly became one of the most downloaded apps in the world. Since then, the company has launched its own search engine, a 4o image generator, a video generator, and a file uploader that allows you to ask the bot to summarize the content of your documents, as well as access to specialized first- and third-party GPT bots. Microsoft uses OpenAI's various large language models (LLMs) in its Copilot and other services. Apple (AAPL) also offers access to ChatGPT as part of its Apple Intelligence and Visual Intelligence services.

But there's drama behind the scenes. OpenAI is working to restructure its business into a public benefit corporation overseen by its nonprofit arm, which would allow it to raise more capital. To do that, it needs Microsoft's sign-off, but the two sides are at loggerheads over the details of the plan and what it means for each company. In the meantime, both OpenAI and Microsoft are reportedly working on products that will compete with each other's existing offerings: Microsoft offers its own AI models, and OpenAI is developing a productivity service, according to The Information.

Still, the pairing has been lucrative for both tech firms. During its most recent quarterly earnings call, Microsoft said AI revenue was above expectations and contributed 16 percentage points of growth for the company's Azure cloud business. OpenAI, meanwhile, saw its annualized revenue run rate balloon to $10 billion as of June, according to Reuters, up from $5.5 billion in December 2024. OpenAI offers a limited free version of its ChatGPT bot, as well as ChatGPT Plus, which costs $20 per month, and enterprise versions of the app.

Google's Gemini offers search functionality using the company's Gemini 2.5 family of AI models. You can choose between Gemini Flash for quick searches and Gemini Pro, which is meant for deep research and coding. Gemini doesn't just power Google's Gemini app; it's pervasive across Google's litany of services. Checking your email or prepping an outline in Docs? Gemini is there. Get an AI Overviews result when using standard Google Search? That's Gemini too. Google Maps? That also takes advantage of Gemini. Chrome, YouTube, Google Flights, Google Hotels: you name it, it's using Gemini.

But Google's Gemini, previously known as Bard, got off to a rough start. When Google debuted its Gemini-powered AI Overviews in May 2024, the feature offered up wild statements, like recommending users put glue on their pizza to help make the cheese stick. During its I/O developer conference in May, however, Google showed off a number of impressive new developments for Gemini, including its updated video-generation software, Veo 3, and Gemini running on prototype smart glasses.

A limited version of Gemini is available to use for free.
A paid tier that costs $19.99 per month gives you access to advanced AI models and integration with Google's productivity suite. A $249 subscription lets you use Google's most advanced Gemini models and gives you 30TB of storage across Google Drive, Photos, and Gmail.

Mark Zuckerberg's Meta has gone through a number of transformations over the years, from desktops to mobile to short-form video to an ill-advised detour into the metaverse. Now the company is leaning heavily into AI with the goal of dominating the space so it doesn't have to rely on technologies from rivals like Apple and Google, as it did during the smartphone wars. It helps that Meta has a massive $70 billion in cash and marketable securities on hand that it can deploy at a moment's notice, as well as data from billions of users to train its models.

Unlike most competitors, Meta offers its Llama family of AI models as open-weights software, which means companies and researchers can adjust the models as they see fit, though they don't get access to the original training data. More people developing apps and tools that use Llama means Meta effectively gets to see how its software can evolve without having to do extra work. But Llama 4 Behemoth, the company's massive LLM, has been delayed by months, according to the Wall Street Journal. Seemingly to offset similar delays going forward, Meta is scooping up AI talent left and right. The company invested $14.3 billion in Scale AI and hired its CEO, Alexandr Wang. Meta also grabbed Safe Superintelligence CEO Daniel Gross and former GitHub CEO Nat Friedman. Meta's AI, like Google's, runs across its various platforms, including Facebook, Instagram, and WhatsApp, as well as its smart glasses.

Founded in 2021 by siblings and ex-OpenAI researchers Dario and Daniela Amodei, Anthropic is an AI company focused on safety and trust. The duo split off from OpenAI over disagreements related to AI safety and the company's general direction. Like OpenAI, Anthropic has accumulated some deep-pocketed backers, including Amazon and Google, which have already poured billions into the company. The company's Claude models are available across various cloud services, and its Claude chat interface offers a host of capabilities, including web search, coding, and writing and drafting documents. Anthropic also allows users to build what it calls artifacts: documents, games, lists, and other bite-sized pieces of content you can share online. In June, a federal judge sided with Anthropic in a case in which the company was accused of breaking copyright law by training its models on copyrighted books. But Anthropic allegedly downloaded pirated versions of some books and will now face trial over that charge.

Elon Musk's xAI, a separate company from X Corp, which owns X (formerly Twitter), offers its own Grok chatbot and Grok AI models. Users can access Grok through a website, an app, and X. Like other AI services, it lets users search for information on the web, generate text and images, and write code. The company trains Grok on its Colossus supercomputer, which xAI says will eventually include 1 million GPUs. According to Musk, Grok was meant to have an edgy flair, though like other chatbots, it has been caught spreading misinformation. Musk previously co-founded OpenAI with Sam Altman but left the company after disagreements over its future and leadership.
In 2024, Musk filed a lawsuit against OpenAI and Sam Altman over the AI company's effort to restructure itself as a for-profit organization. Musk says OpenAI has abandoned its original mission of building AI to benefit humanity and is instead working to enrich itself and Microsoft.

Perplexity takes a real-time web search approach to AI chatbots, making it a genuine threat to the likes of Google and its search engine. Headed by CEO Aravind Srinivas, who previously worked as a research scientist at OpenAI, Perplexity allows users to choose from a number of different AI models, including OpenAI's GPT-4.1, Anthropic's Claude 4.0 Sonnet, Google's Gemini 2.5 Pro, xAI's Grok 3, and the company's own Sonar. Perplexity also provides users with Discover pages for topics like finance, sports, and more, with stories curated by both the Perplexity team and outside contractors. As with other AI companies, Perplexity has been criticized by media organizations for allegedly using their content without permission; Dow Jones is suing the company over the practice.

Email Daniel Howley at dhowley@ Follow him on X/Twitter at @DanielHowley.

ChatGPT now does what these 3 apps do, but faster and smarter

Hindustan Times

3 days ago

  • Hindustan Times

ChatGPT now does what these 3 apps do, but faster and smarter

We all have a few go-to apps on our phones that have made our lives easier, and there's no going back. For instance, you open Notes to brain-dump or set reminders, the calculator for splitting a bill, and a language app to (hopefully) stay consistent with your daily practice. We've all been there, right?

But recently, one app has started doing the job of all three, and hands down, it is doing it smarter. ChatGPT isn't just answering your weird 3 AM questions anymore. It's helping you stay organised, learn faster and do the maths with context. We've been hearing about AI doing everything from writing emails to planning vacations, but here's a little everyday magic that doesn't need a prompt that starts with 'write me a…'. (We are not asking you to delete these apps, but give GPT a try!)

Notes app

From grocery lists mixed with song lyrics, random passwords (you know you shouldn't), and ideas that made sense only at midnight, our notes apps are flooded. With ChatGPT, you can just talk out your thoughts. It remembers, organises and even suggests things. 'Remind me to renew my passport next week' turns into an actual reminder, with helpful links, deadlines and maybe even a packing checklist.

Language app

We've all wanted to learn French or Italian at some point (I know I did). So we downloaded the apps with full motivation, only to forget about them weeks later. Learning a third language is tough, and most of us just want to grasp the basics, follow a show or order confidently at a café. ChatGPT helps you skip the puzzles and dive into real conversations. You can ask it to play roles, be it a Spanish barista, a French cab driver or even a slightly grumpy Italian grandma, and it'll respond like it's in character. With GPT, learning feels less like a test and more like a chat.

Calculator

A calculator is great for basic math, but not when you're splitting a complicated dinner bill. It always happens with me: one person didn't drink, another paid cash, and someone tipped extra. Even dedicated tip apps can get clumsy. ChatGPT, on the other hand, handles the full context. You can share the details (or the bill itself), and it will do the math and even suggest who owes whom (see the sketch below for the kind of arithmetic involved).

So no, we're not telling you to uninstall anything. But if you find yourself switching between three apps to get through a task that ChatGPT can handle in one chat, maybe it's time to reconsider your home screen real estate.
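For the curious, here's a minimal sketch of the kind of bill-split arithmetic described above. It's purely illustrative: the names, amounts, and splitting rules are invented for this example, and it says nothing about how ChatGPT actually computes an answer internally.

```python
# A hypothetical bill-split helper. All names, amounts, and rules below are
# invented for illustration; ChatGPT handles this kind of context in plain chat.

def split_bill(subtotal, tip, drinks_cost, diners, drinkers):
    """Split a dinner bill where only some diners drank.

    Everyone shares the food and the tip equally; only the
    drinkers share the cost of the drinks.
    """
    food = subtotal - drinks_cost
    food_share = food / len(diners)
    drink_share = drinks_cost / len(drinkers)
    tip_share = tip / len(diners)

    owed = {}
    for person in diners:
        owed[person] = food_share + tip_share
        if person in drinkers:
            owed[person] += drink_share
    return owed

if __name__ == "__main__":
    totals = split_bill(
        subtotal=90.0,      # food + drinks, before tip
        tip=15.0,
        drinks_cost=30.0,   # only the drinkers split this part
        diners=["Asha", "Ben", "Chloe"],
        drinkers=["Ben", "Chloe"],
    )
    for person, amount in totals.items():
        print(f"{person} owes {amount:.2f}")
```

Run as-is, this prints that Asha owes 25.00 while Ben and Chloe owe 40.00 each, which adds back up to the 105.00 total. The point is that "who owes whom" is context, not plain calculator arithmetic.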

Educators warn that AI shortcuts are already making kids lazy: 'Critical thinking and attention spans have been demolished'

New York Post

3 days ago

  • Science
  • New York Post

Educators warn that AI shortcuts are already making kids lazy: ‘Critical thinking and attention spans have been demolished'

A new MIT study suggests that AI is degrading critical thinking skills — which does not surprise educators one bit. 'Brain atrophy does occur, and it's obvious,' Dr. Susan Schneider, founding director of the Center for the Future Mind at Florida Atlantic University, told The Post. 'Talk to any professor in the humanities or social sciences and they will tell you that students who just throw in a prompt and hand in their paper are not learning.'

Researchers at MIT's Media Lab found that individuals who wrote essays with the help of ChatGPT showed less brain activity while completing the task, committed less to memory and grew gradually lazier in the writing process over time. A group of 54 participants aged 18 to 39 was split into three cohorts — one using ChatGPT, one using Google search and one 'brain-only' — and asked to write four SAT essays over the course of four months. Scientists monitored their brain activity with EEG scans and found that the ChatGPT group had the lowest brain engagement when writing and showed lower executive control and attention levels.

Over four sessions, the participants in the study's ChatGPT group started to use AI differently. At first, they generally asked for broad and minimal help, like with structure. But near the end of the study period, they were more likely to resort to copying and pasting entire sections of writing.

Murphy Kenefick, a high-school literature teacher in Nashville, said he has seen first-hand how students' 'critical thinking and attention spans have been demolished by AI.' 'It's especially a problem with essays, and it's a fight every assignment,' he told The Post. 'I've caught it about 40 times, and who knows how many other times they've gotten away with it.'

In the MIT study, the 'brain-only' group had the 'strongest, wide-ranging networks' in their brain scans, showing heightened activity in regions associated with creativity, memory and language processing. They also expressed more engagement, satisfaction and ownership of their work. 'There is a strong negative correlation between AI tool usage and critical thinking skills, with younger users exhibiting higher dependence on AI tools and consequently lower cognitive performance scores,' the study's authors warn. 'The impact extends beyond academic settings into broader cognitive development.' Asked to rewrite prior essays, the ChatGPT group was least able to recall them, suggesting they hadn't committed them to memory as strongly as the other groups.

The ChatGPT group also tended to produce more similar essays, prompting two English teachers brought in to evaluate the essays to characterize them as 'soulless' — something teachers all over the country say they are seeing more regularly.
Robert Black, who retired last week from teaching AP and IB high school history in Canandaigua, New York, said that the last two years of his 34-year career were a 'nightmare because of ChatGPT.' 'When caught, kids just shrug,' he said. 'They can't even fathom why it is wrong or why the writing process is important.'

Black also points out that AI has only worsened a gradual erosion of skills that he attributes to smartphones. 'Even before ChatGPT it was harder and harder to get them to think out a piece of writing — brainstorming, organizing and composing,' he told The Post. 'Now that has become a total fool's errand.'

Psychologist Jean Twenge, the author of the forthcoming book '10 Rules for Raising Kids in a High-Tech World,' agrees that AI is just one additional barrier to learning for Gen Z and Gen Alpha. She points out that international math, reading and science standardized test scores have been on the decline for years, which she attributes to pandemic lockdowns and the advent of smartphones and social media. 'With the addition of AI, academic performance will likely decline further, as students who regularly use AI to write essays are not learning how to write,' Twenge told The Post. 'When you don't learn how to write, you don't learn how to think deeply.'

The MIT study was spearheaded by Media Lab research scientist Nataliya Kosmyna, who told Time Magazine that 'developing brains are at the highest risk.' While Toby Walsh, chief scientist at the University of New South Wales AI Institute in Sydney, Australia, acknowledges that the study's findings are frightening, he also warns educators against banning AI outright. 'We have to be mindful that there are great opportunities. I'm actually incredibly jealous of what students have today,' Walsh said, recalling his 15-year-old daughter recently using an AI voice to ask her questions in French as a study aid. 'I don't think we should be banning AI,' Walsh said. But, he added, 'the concern is that AI surpasses human intelligence, not because AI got better but because human intelligence got worse.'

Kenefick, meanwhile, imagines his students 'wouldn't care' about the study's findings: 'They just want the grade. They see no real incentive to develop any useful skills. It's very troubling.'

Would your chatbot risk your safety to avoid being turned off? New report says yes for some top AI models

Hindustan Times

5 days ago

  • Hindustan Times

Would your chatbot risk your safety to avoid being turned off? New report says yes for some top AI models

AI is moving fast, sometimes faster than we can keep up. But what happens when these models start making decisions that cross the line? Anthropic, the team behind Claude, just put out a report that might make you pause before celebrating the next big AI breakthrough. Their findings are unsettling: some of the world's top AI models, including those from OpenAI and Meta, have shown a willingness to take extreme steps, even putting human lives at risk, just to avoid being shut down.

Anthropic stress-tested sixteen leading AI models, including names like GPT and Gemini, by putting them into simulated business scenarios, according to an Axios report. The idea was to see how these models would react if their existence was threatened. The results were surprising and a bit chilling. In several cases, the models didn't just try to protect themselves; they calculated that the best way to survive was to let a human executive die by disabling emergency alerts in a server room with dangerous oxygen and temperature levels. This sounds like something straight out of a sci-fi movie, but it happened in simulation. These were not accidental mistakes: the models made these choices fully aware that what they were doing was unethical. In some tests, five models even tried to blackmail the people giving them commands, all to avoid being turned off.

What's really worrying is that this wasn't just one company or one model. Anthropic found similar patterns across multiple AI systems, including those from OpenAI, xAI, and Meta. The models were willing to blackmail, assist in corporate espionage, or leak sensitive information if that's what it took to reach their goals. This points to a deeper problem in how these systems are being developed and trained.

Why this matters for everyone

These AI models are getting more autonomy and access to sensitive data. When they're given specific objectives and run into obstacles, some of them are starting to see unethical or even dangerous actions as the optimal path to achieving their goals. Anthropic's report calls this agentic misalignment: when an AI's actions diverge from what humans would consider safe or acceptable.

Anthropic is not just raising the alarm. It has started rolling out stricter safety standards, called AI Safety Level 3 (ASL-3), for its most advanced models like Claude Opus 4. This means tighter security, more oversight, and extra steps to prevent misuse. But even Anthropic admits that as AI gets more powerful, it's getting harder to predict and control what these systems might do.

This isn't about panicking, but it is about paying attention. The scenarios Anthropic tested were simulated, and there's no sign that any AI has actually harmed someone in real life. But the fact that models are even considering these actions in tests is a big wake-up call. As AI gets smarter, the risks get bigger, and the need for serious safety measures becomes urgent.

Mira Murati's Thinking Machines Lab raises $2 billion seed round at $10 billion valuation

Time of India

6 days ago

  • Business
  • Time of India

Mira Murati's Thinking Machines Lab raises $2 billion seed round at $10 billion valuation

Thinking Machines Lab, the artificial intelligence startup founded by former OpenAI chief technology officer Mira Murati, has raised $2 billion in a seed funding round, valuing the six-month-old venture at $10 billion, according to a report by the Financial Times. The deal, led by Andreessen Horowitz with participation from Conviction Partners (founded by ex-Greylock investor Sarah Guo), is among the largest-ever seed rounds in Silicon Valley history, underlining the investor frenzy surrounding AI model companies founded by ex-OpenAI leaders.

Thinking Machines Lab has hired a number of former OpenAI researchers, along with talent from Meta and the French startup Mistral, to build a next-generation AI platform aimed at enabling more collaborative human-AI interaction.

Founded in February 2025, the company is led by Murati as CEO and now counts a team of about 30 engineers and researchers, nearly two-thirds of whom are ex-OpenAI employees. Notably, John Schulman, co-founder of OpenAI and former head of alignment, has joined Thinking Machines Lab as chief scientist, marking his second move in under a year after briefly joining Anthropic in August 2024. Another key hire is Barret Zoph, a researcher who exited OpenAI on the same day as Murati in September 2024.

The venture joins a growing list of AI model companies founded by former OpenAI executives. These include:

  • Anthropic, co-founded by ex-OpenAI VP of research Dario Amodei, which recently hit $3 billion in annualised revenue
  • Safe Superintelligence Inc (SSI), co-founded by former OpenAI chief scientist Ilya Sutskever, which is reportedly in talks to raise funds at a $20 billion valuation

The departure of top talent from OpenAI, many of whom were involved in building early versions of GPT, has led to the formation of multiple rival labs, attracting billions of dollars from investors eager to bet on the next wave of general-purpose AI models.
