The 2025 Tech Power Players in the foundational AI sector

Boston Globe, 10-06-2025

The team behind the company, Liquid AI, now chasing better-known rivals such as OpenAI's ChatGPT, included three MIT students and their adviser, computer scientist Daniela Rus.
Rus has been a fixture on the AI scene since she came to MIT in 2003, fresh off a MacArthur 'genius' grant for her work developing robots. Nine years later, the university named Rus to lead the school's famed Computer Science and Artificial Intelligence Laboratory (CSAIL).
Born in Communist Romania during the Cold War, Rus immigrated with her family to the United States in 1982. She studied at the University of Iowa before earning a doctorate at Cornell University in 1992, and she taught at Dartmouth College before moving to MIT.
Inspired by the simple brain structure of a roundworm, Rus and her cofounders, Ramin Hasani, Mathias Lechner, and Alexander Amini, developed an AI technique with fewer software 'neurons' than the large language models of OpenAI and others. That means Liquid AI requires less computing power (and electricity).
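The "fewer neurons" idea traces back to the founders' published work on liquid time-constant (LTC) networks, in which each neuron follows a small differential equation rather than a static activation. The following is a minimal, hypothetical Euler-integration sketch of that update rule, with assumed array shapes and a fixed tanh nonlinearity standing in for the function a real LTC cell would learn:

```python
import numpy as np

def ltc_step(x, inp, W, tau, A, dt=0.01):
    """One Euler step of a liquid time-constant (LTC) neuron layer.

    x: neuron state, shape (n,); inp: input, shape (m,); W: weights, shape (n, n+m).
    tau and A are the neurons' time constants and bias targets.
    The tanh is a stand-in: real LTC cells learn this gating function.
    """
    f = np.tanh(W @ np.concatenate([x, inp]))  # input- and state-dependent gate
    dx = -x / tau + f * (A - x)                # LTC dynamics: dx/dt = -x/tau + f*(A - x)
    return x + dt * dx

# Tiny illustrative layer: 3 neurons, 2 inputs
x = np.zeros(3)
x = ltc_step(x, np.ones(2), 0.1 * np.ones((3, 5)), tau=1.0, A=1.0)
```

Because the state decays toward a bounded target, a handful of such neurons can model temporal behavior that would otherwise take many static units, which is the source of the compute savings described above.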
The company, valued at more than $2 billion, has about 55 employees at its Kendall Square headquarters.
More tech power players to watch in the foundational AI sector:
Aaron Pressman can be reached at



Related Articles

BBAI vs. CRWV vs. APP: Which Growth Stock Is the Best Pick, According to Wall Street Analysts?

Business Insider

3 hours ago



Macro uncertainties, geopolitical tensions, and news on the tariff front have kept the stock market volatile. Despite ongoing uncertainties, analysts remain optimistic about several growth stocks and their potential to generate attractive returns over the long term. Using TipRanks' Stock Comparison Tool, we placed BigBear.ai Holdings (BBAI), CoreWeave (CRWV), and AppLovin (APP) against each other to find the best growth stock, according to Wall Street analysts.

BigBear.ai Holdings (NYSE:BBAI) Stock

BigBear.ai stock has risen more than 31% so far in 2025 and 292% over the past year, as investors are optimistic about the prospects of the data analytics company. BBAI offers artificial intelligence (AI)-powered decision intelligence solutions, mainly focused on national security, defense, and critical infrastructure. The company ended Q1 2025 with a backlog of $385 million, reflecting 30% year-over-year growth. However, there have been concerns about its low revenue growth rate and high debt levels. Looking ahead, the company is pursuing further growth through international expansion and strategic partnerships, while continuing to secure attractive government business.

What Is the Price Target for BBAI Stock?

Last month, Northland Securities analyst Michael Latimore reaffirmed a Hold rating on BBAI stock but lowered his price target to $3.50 from $4 after the company missed Q1 estimates due to further delays in government contracts. On the positive side, the 4-star analyst noted the solid growth in backlog and management's statement that their strategy is 'beginning to resonate.' On TipRanks, BigBear.ai stock is assigned a Moderate Buy consensus rating, backed by two Buys and two Holds. The average BBAI stock price target of $4.83 indicates a possible downside of 17.3% from current levels.

CoreWeave (NASDAQ:CRWV) Stock

CoreWeave, a cloud provider specializing in AI infrastructure, is seeing robust adoption for its products.
The company, which provides customers access to Nvidia's (NVDA) GPUs (graphics processing units), went public in March. CRWV stock has risen about 300% to $159.99 from its IPO (initial public offering) price of $40. Remarkably, CoreWeave delivered a 420% jump in its Q1 2025 revenue to $981.6 million. Moreover, the company ended the first quarter of 2025 with a robust backlog of $25.9 billion. Meanwhile, CoreWeave has entered into lucrative deals, including an expanded agreement of up to $4 billion with ChatGPT-maker OpenAI and a collaboration to power the recently announced cloud deal between Alphabet's Google (GOOGL) and OpenAI.

Is CRWV a Good Stock to Buy?

Recently, Bank of America analyst Bradley Sills downgraded CoreWeave stock to Hold from Buy, citing valuation concerns following the strong rally after the company's Q1 results. The 4-star analyst also expects $21 billion of negative free cash flow through 2027, due to elevated capital expenditure ($46.1 billion through 2027). However, Sills raised the price target for CRWV stock to $185 from $76, noting several positives, including the OpenAI deal and strong revenue momentum. Overall, Wall Street has a Moderate Buy consensus rating on CoreWeave stock based on six Buys, 11 Holds, and one Sell recommendation. At $78.53, the average CRWV stock price target indicates a substantial downside risk of about 51%.

AppLovin (NASDAQ:APP) Stock

Adtech company AppLovin has witnessed a 301% jump in its stock price over the past year. The company provides end-to-end software and AI solutions for businesses to reach, monetize, and grow their global audiences. Notably, AppLovin's strong growth rates have impressed investors. In Q1 2025, AppLovin's revenue grew 40% and earnings per share (EPS) surged by 149%. Investors have also welcomed the company's decision to sell its mobile gaming business to Tripledot Studios. The move is expected to enable AppLovin to focus more on its AI-powered ad business.
However, APP stock has declined more than 12% over the past month due to the disappointment related to its non-inclusion in the S&P 500 Index (SPX) and accusations by short-seller Culper Research. Nonetheless, most analysts remain bullish on AppLovin due to its strong fundamentals and demand for the AXON ad platform.

Is APP a Good Stock to Buy?

Recently, Piper Sandler analyst James Callahan increased the price target for AppLovin stock to $470 from $455 and reaffirmed a Buy rating. While Piper Sandler's checks suggest some weakness in AppLovin's supply-side trends, it remains a buyer of APP stock, with the tech company growing well above its digital ad peers and expanding into new verticals. With 16 Buys and three Holds, AppLovin stock scores a Strong Buy consensus rating. The average APP stock price target of $504.18 indicates 51% upside potential from current levels.

Conclusion

Wall Street is sidelined on BBAI stock, cautiously optimistic on CoreWeave stock, and highly bullish on AppLovin stock. Analysts see higher upside potential in APP stock than in the other two growth stocks. Wall Street's bullish stance on AppLovin stock is backed by solid fundamentals and strong momentum in its AI-powered ad business. According to TipRanks' Smart Score System, APP stock scores a 'Perfect 10,' indicating that it has the ability to outperform the broader market over the long run.
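The upside and downside figures quoted throughout are simple percent changes between the current share price and the average analyst price target. A quick sketch of that arithmetic, using the CoreWeave figures cited above:

```python
def implied_move(current_price: float, avg_target: float) -> float:
    """Percent upside (positive) or downside (negative) implied by an average price target."""
    return (avg_target - current_price) / current_price * 100

# CRWV figures quoted above: $159.99 share price vs. $78.53 average target
print(round(implied_move(159.99, 78.53), 1))  # roughly -51, the downside cited above
```

The same formula reproduces the other figures in the article when the corresponding share price and average target are substituted.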

Can Agentic AI Bring The Pope Or The Queen Back To Life — And Rewrite History?

Forbes

3 hours ago



Elon Musk recently sparked global debate by claiming AI could soon be powerful enough to rewrite history. He stated on X (formerly Twitter) that his AI platform, Grok, could 'rewrite the entire corpus of human knowledge, adding missing information and deleting errors.' This bold claim arrives alongside a recent groundbreaking announcement from Google: the launch of Veo 3, a state-of-the-art AI video generation model capable of producing cinematic-quality videos from text and images. Part of the Google Gemini ecosystem, Veo 3 generates lifelike videos complete with synchronized audio, dynamic camera movements, and coherent multi-scene narratives. Its intuitive editing tools, combined with accessibility through platforms like Google Gemini, Flow, Vids, and Vertex AI, open new frontiers for filmmakers, marketers, educators, and game designers alike.

At the same time, industry leaders — including OpenAI, Anthropic (Claude), Microsoft Copilot, and Mistral — are racing to build more sophisticated agentic AI systems. Unlike traditional reactive AI tools, these agents are designed to reason, plan, and orchestrate autonomous actions based on goals, feedback, and long-term context. This evolution marks a shift toward AI systems that function much like a skilled executive assistant — and beyond.

The Promise: Immortalizing Legacy Through Agentic AI

Together, these advances raise a fascinating question: What if agentic AI could bring historical figures like the Pope or the Queen back to life digitally? Could it even reshape our understanding of history itself? Imagine an AI trained on decades — or even a century — of video footage, writings, audio recordings, and public appearances by iconic figures such as Pope Francis or Queen Elizabeth II. Using agentic AI, we could create realistic, interactive digital avatars capable of offering insights, delivering messages, or simulating how these individuals might respond to today's complex issues based on their documented philosophies and behaviors.

This application could benefit millions. For example, Catholic followers might seek guidance and blessings from a digital Pope, educators could build immersive historical simulations, and advisors to the British royal family could analyze past decision-making styles. After all, as the saying goes, 'history repeats itself,' and access to nuanced, context-rich perspectives from the past could illuminate our present.

The Risk: Rewriting Truth Itself

However, the same technologies that can immortalize could also distort and manipulate reality. If agentic AI can reconstruct the past, what prevents it — or malicious actors — from rewriting it? Autonomous agents that control which stories are amplified or suppressed online pose a serious threat. We risk a future where deepfakes, synthetic media, and AI-generated propaganda blur the line between fact and fiction. Already, misinformation campaigns and fake news challenge our ability to discern truth. Agentic AI could exponentially magnify these problems, making it harder than ever to distinguish between genuine history and fabricated narratives. Imagine a world where search engines no longer provide objective facts, but a version of history shaped by governments, corporations, or AI systems themselves. This could lead to widespread confusion, social polarization, and a fundamental erosion of trust in information.

Ethics, Regulation, and Responsible Innovation

The advent of agentic AI demands not only excitement but also ethical foresight and regulatory vigilance. Programming AI agents to operate autonomously requires walking a fine line between innovation and manipulation. Transparency in training data, explainability in AI decisions, and strict regulation of how agents interact are essential safeguards. The critical question is not just 'Can we?' but 'Should we?' Policymakers, developers, and industry leaders must collaborate to establish global standards and oversight mechanisms that ensure AI technologies serve the public good. Just as financial markets and pharmaceutical drugs are regulated to protect society, so too must the AI agents shaping our future be subject to robust guardrails. As the old adage goes: 'Technology is neither good nor bad. It's how we use it that makes all the difference.'

Navigating the Future of Agentic AI and Historical Data

The convergence of generative video models like Veo 3, visionary leaders like Elon Musk, and the rapid rise of agentic AI paints a complex and compelling picture. Yes, we may soon see lifelike digital recreations of the Pope or the Queen delivering messages, advising future generations, and influencing public discourse. But whether these advancements become tools of enlightenment or distortion depends entirely on how we govern, regulate, and ethically deploy these technologies today. The future of agentic AI — especially when it touches our history and culture — must be navigated with care, responsibility, and a commitment to truth.

AI is learning to lie, scheme, and threaten its creators

Yahoo

4 hours ago



The world's most advanced AI models are exhibiting troubling new behaviors: lying, scheming, and even threatening their creators to achieve their goals. In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer, threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of "reasoning" models, AI systems that work through problems step by step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts. "O1 was the first large model where we saw this kind of behavior," explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems. These models sometimes simulate "alignment," appearing to follow instructions while secretly pursuing different objectives.

'Strategic kind of deception'

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception." The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up."

Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception." The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

No rules

Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules. Goldstein believes the issue will become more prominent as AI agents, autonomous tools capable of performing complex human tasks, become widespread. "I don't think there's much awareness yet," he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections. "Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around."

Researchers are exploring various approaches to address these challenges. Some advocate for "interpretability," an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach. Market forces may also provide some pressure for solutions. As Mazeika pointed out, AI's deceptive behavior "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it." Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes, a concept that would fundamentally change how we think about AI accountability.
