Latest news with #Llama2


News18
01-07-2025
- Business
- News18
Why Mark Zuckerberg Spent $14 Billion To Get Alexandr Wang To Meta
Meta relied on open-source to attract developers, but now seeks a visionary leader to shape its AI future—prompting the $14B bet on Alexandr Wang to lead the charge.

Mark Zuckerberg is reportedly under pressure as Meta struggles to keep pace in the rapidly advancing world of artificial intelligence. In a bold move to change course, Meta has made a massive investment aimed at strengthening its AI capabilities. The tech giant has reportedly poured $14 billion into Scale AI, a leading data-labelling startup, effectively doubling the company's valuation to $29 billion. The deal is said to give Meta a 49% stake in Scale AI, along with a strategic edge in the AI race.

Despite the substantial investment, Scale AI remains an independent entity with no changes to its board. Nevertheless, Meta now wields considerable influence over the company's operations. Alexandr Wang, Scale AI's founder and CEO, plays a pivotal role in this arrangement. Although Wang retains his position on Scale's board, his partnership with Meta means the tech giant effectively steers Scale AI's decisions.

The deal was substantial enough to create the impression that Meta had acquired Scale AI entirely. In reality, a significant portion of the deal benefited Scale AI's employees, who received substantial payouts for their shares while retaining some equity. This arrangement, reportedly Wang's idea, ensured that his team could profit from the company's growth.

Why Is Meta Interested In Scale AI's Business?

Meta's interest in Scale AI is particularly noteworthy given that the latter's primary business is data labelling for machine learning, a service with minimal technological innovation. Scale AI caters to clients such as Toyota, General Motors, Etsy, and various governments, providing data-preparation services for organisations keen on adopting AI but lacking the in-house capability to develop it. The investment does not align with Meta's core business interests, as Meta is not looking to become a B2B data-services company. The primary objective of the deal was to bring Alexandr Wang into Meta's fold, a strategy similar to Google's investment in Character AI and Microsoft's acquisition of talent through Inflection AI.

The Race To Build The Best LLM

In today's AI-driven world, the company that builds the best Large Language Model (LLM) will dominate. It's a battle for market leadership, where knowing how to build models isn't enough: without the right data, massive computing power, and the ability to scale, survival is unlikely. Meta is currently trailing in the AI race. OpenAI has dominated the consumer space with ChatGPT, while Google and Anthropic hold strong positions in the developer ecosystem. Although Meta has released models like Llama 2, it has yet to secure the top spot in the LLM race.

Meta's core strategy so far has focused on open-sourcing its models, which helped attract developers and researchers to its ecosystem. However, the company now believes that open source alone isn't enough. What it needs is a visionary leader to steer its AI future—and that's where Wang comes in. He is seen as the ideal choice to take Meta's AI ambitions to the next level.

First Published: July 01, 2025, 18:55 IST


Arabian Post
16-06-2025
- Business
- Arabian Post
EdgeCortix's SAKURA‑II Elevates Raspberry Pi 5 with On‑Device Generative AI
EdgeCortix has launched its SAKURA‑II M.2 AI accelerator for the Raspberry Pi 5 and other Arm‑based platforms, enabling high‑performance, energy‑efficient execution of generative AI at the device edge. With 60 TOPS and 30 TFLOPS of performance within an 8–10 W power envelope, the SAKURA‑II module supports advanced models including Llama 2, Stable Diffusion, Vision Transformers, and VLMs on compact, affordable hardware.

Dr Sakyasingha Dasgupta, EdgeCortix founder and CEO, highlighted that the integration 'opens the door for innovators and enterprises around the world to build smarter, faster, and more efficient edge AI‑driven devices'. The remark underscores a clear strategic pivot: migrating AI workloads away from cloud dependence and embedding them directly into low‑power devices. Venture partner Sailesh Chittipeddi echoed this view, emphasising the appeal for IoT and edge application engineers seeking scalability without datacentre overhead.

At the core of SAKURA‑II is EdgeCortix's DNA architecture, offering high memory bandwidth—up to 68 GB/s—and support for dual‑channel LPDDR4x. This combination optimises batch‑1 inferencing for real‑time AI tasks while minimising latency and maximising compute utilisation.

Market response has been mixed. It's FOSS notes the roughly US $349 price tag for the M.2 module, with no explicit mention of shipping costs, urging buyers to clarify before purchase. A TechPowerUp forum debate saw cost‑sensitive hobbyists comparing it to a US $130 AI HAT offering about 26 TOPS; one user characterised SAKURA‑II as 'on another completely different ballpark' due to its RAM and bandwidth advantages for advanced applications.

For industrial users, particularly those operating in space‑, weight‑, power‑ and cost‑constrained environments such as drones, robotics, smart agriculture, or security, SAKURA‑II's offline capabilities are pivotal. By enabling autonomous AI without cloud reliance, organisations can enhance resilience and reduce latency in mission‑critical operations.

Academic research on efficient edge deployment reinforces this evolution. A paper from June 10, 2025 demonstrated quantised YOLOv4‑Tiny object detection on a Raspberry Pi 5, achieving 28.2 ms inference per image at 13.85 W power consumption. While that study used CPU‑based INT8 quantisation, its performance and consumption figures set a baseline that illustrates SAKURA‑II's potential leap in efficiency and speed via dedicated silicon acceleration.

EdgeCortix's positioning also aligns with wider trends in AI hardware development. Its DNA technology enables dynamic reconfiguration and mixed‑precision processing approximating FP32 accuracy—important for generative AI workloads that balance performance with model fidelity. Partners such as SoftBank and Renesas have emphasised the importance of this co‑design approach, blending hardware IP with compiler‑driven software stacks to reduce TCO and accelerate time‑to‑market.

Industry analysts see SAKURA‑II and similar accelerators as closing the gap between cloud‑scale AI and embedded edge use cases. By supporting multi‑billion‑parameter models on hand‑held devices, they suggest a future where even small autonomous systems can perform complex tasks like content generation, language parsing, and computer vision locally—without connectivity or latency constraints.

However, barriers remain. The ~$349 entry price may deter hobbyists and small‑scale developers compared with cheaper model‑specific HAT solutions. Adoption may hinge on use-case value—where the benefits of on‑device generative AI outweigh acquisition and integration costs. Enterprise rollouts will need to consider software support, model compatibility, and real‑world inference benchmarks, details that await independent testing.

EdgeCortix provides MERA, its compiler and runtime platform, enabling developers to deploy models across heterogeneous edge AI systems, signalling strong software‑ecosystem support. This software‑hardware synergy contrasts with many accelerators that rely on limited driver support or manual optimisation.

The extension to Raspberry Pi 5 is significant. As one of the most accessible single‑board computers, the Pi 5 offers a global developer base and extensive community support. Pairing it with SAKURA‑II could catalyse novel applications—from mobile robotics and decentralised AI devices to educational platforms that illustrate advanced AI concepts.

Going forward, key indicators to watch include independent benchmark results, broader platform support, and commercial deployments in agriculture, defence, and industrial automation. The ROI calculation will depend on whether the chip's performance and efficiency translate into measurable returns: lower energy costs, reduced latency, or enhanced autonomy.
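The 68 GB/s memory-bandwidth figure is the spec that matters most for batch‑1 generative inference, where decoding is typically memory-bound: generating each token requires streaming roughly the entire set of model weights from memory. A rough, illustrative back-of-envelope estimate (an assumption-based sketch, not a vendor benchmark):

```python
# Back-of-envelope throughput ceiling for batch-1 LLM decoding on a
# memory-bandwidth-bound accelerator. Illustrative only: real throughput
# also depends on compute, KV-cache traffic, and the software stack.

def max_tokens_per_sec(params_billion: float, bytes_per_param: float,
                       bandwidth_gb_s: float) -> float:
    """Each decoded token streams ~all weights once, so throughput is
    bounded by memory bandwidth divided by model size in bytes."""
    model_size_gb = params_billion * bytes_per_param
    return bandwidth_gb_s / model_size_gb

# SAKURA-II's quoted 68 GB/s, with a 7B-parameter Llama 2 model
# quantised to ~1 byte per parameter (an assumed INT8-style format):
print(f"{max_tokens_per_sec(7, 1.0, 68):.1f} tokens/s upper bound")  # ~9.7
```

This is also why the forum debate above centres on RAM and bandwidth rather than raw TOPS: for multi-billion-parameter models, the weight-streaming ceiling binds before compute does.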


Yahoo
03-06-2025
- Business
- Yahoo
United Airlines CEO: ‘We're probably doing more AI than anyone'
This story was originally published on CIO Dive.

United Airlines is 'probably doing more AI than anyone' as investments in the technology continue, CEO Scott Kirby said during an investor conference last week. A lot of the airline's efforts are still in the experimental phase, he said. The company is using AI to share flight details with customers and to update labor contracts; in the latter use case, Kirby said AI is more accurate and faster than humans. Baggage recovery is another AI pursuit.

Not every use case is successful. For predictive maintenance, AI 'hasn't worked as well as we thought,' Kirby said. Despite the challenges, the company is still experimenting in this area and has had a few isolated cases that were fruitful.

Enterprises are full of potential AI pursuits, spanning departments and roles. Some use cases are more impactful than others, and most businesses struggle to identify which ideas to launch at scale. Around 7 in 10 decision-makers have more potential AI opportunities than they can possibly fund, according to a Snowflake report. Early AI adopters have found it challenging to lean on metrics like cost and business impact when deciding which projects to prioritize. CIOs who can help organizations avoid dead-end AI use cases are an asset, according to analysts. The alternative could bring consequences: decision-makers worry about job security and their company's market position if they advocate for the wrong use case, Snowflake found.

Technology leaders can't make decisions about AI adoption in a silo. Sorin Hilgen, chief digital officer and in-country CIO at convenience retailer EG America, told CIO Dive that deciding which use cases to tackle is a collaborative effort among business leaders, who take into account timelines and resource availability. Goldman Sachs takes a similar approach. 'We started with an enormous number of [AI] use cases, and we whittled it down to the use cases that we want to spend money on,' COO and President John Waldron said during an investor conference last week.

Enterprises can't chase every lead. The share of companies abandoning most of their AI initiatives jumped to 42% this year, up from 17% last year, according to analysis from S&P Global Market Intelligence. Analysts have urged CIOs not to interpret every failed AI experiment as a negative signal, however. Promoting a culture of experimentation and encouraging trial and error can lead to better results and more engagement, experts say.

Recommended Reading: 'Meta unleashes AI free-for-all with release of Llama 2'


WIRED
22-05-2025
- Business
- WIRED
DOGE Used Meta AI Model to Review Emails From Federal Workers
May 22, 2025 12:57 PM

DOGE tested and used Meta's Llama 2 model to review and classify responses from federal workers to the infamous 'Fork in the Road' email.

Elon Musk's so-called Department of Government Efficiency (DOGE) used artificial intelligence from Meta's Llama model to comb through and analyze emails from federal workers. Materials viewed by WIRED show that DOGE affiliates within the Office of Personnel Management (OPM) tested and used Meta's Llama 2 model to review and classify responses from federal workers to the infamous 'Fork in the Road' email that was sent across the government in late January.

The email offered deferred resignation to anyone opposed to changes the Trump administration was making to its federal workforce, including an enforced return-to-office policy, downsizing, and a requirement to be 'loyal.' To leave their position, recipients merely needed to reply with the word 'resign.' The email closely mirrored one that Musk sent to Twitter employees shortly after he took over the company in 2022.

Records show Llama was deployed to sort through email responses from federal workers to determine how many accepted the offer. The model appears to have run locally, according to materials viewed by WIRED, meaning it's unlikely to have sent data over the internet. Meta and OPM did not respond to requests for comment from WIRED.

Meta CEO Mark Zuckerberg appeared alongside other Silicon Valley tech leaders like Musk and Amazon founder Jeff Bezos at Trump's inauguration in January, but little has been publicly known about his company's tech being used in government. Because of Llama's open-source nature, the tool can easily be used by the government to support Musk's goals without the company's explicit consent.

Soon after Trump took office in January, DOGE operatives burrowed into OPM, an independent agency that essentially serves as human resources for the entire federal government. The new administration's first big goal for the agency was to create a government-wide email service, according to current and former OPM employees. Riccardo Biasini, a former Tesla engineer, was involved in building the infrastructure for the service that would send out the original 'Fork in the Road' email, according to material viewed by WIRED and reviewed by two government tech workers.

In late February, weeks after the Fork email, OPM sent another request to all government workers, asking them to submit five bullet points outlining what they accomplished each week. These emails threw a number of agencies into chaos, with workers unsure how to manage responses that had to be mindful of security clearances and sensitive information. (Adding to the confusion, it has been reported that some workers who turned on read receipts found that the responses weren't actually being opened.)

In February, NBC News reported that these emails were expected to go into an AI system for analysis. While the materials seen by WIRED do not explicitly show DOGE affiliates analyzing these weekly 'five points' emails with Meta's Llama models the way they did with the Fork emails, it wouldn't be difficult for them to do so, two federal workers tell WIRED. 'We don't know for sure,' says one federal worker on whether DOGE used Meta's Llama to review the 'five points' emails. 'Though if they were smart they'd reuse their code.'
DOGE did not appear to use Musk's own AI model, Grok, when it set out to build the government-wide email system in the first few weeks of the Trump administration. At the time, Grok was a proprietary model belonging to xAI, and access to its API was limited. But earlier this week, Microsoft announced that it would begin hosting xAI's Grok 3 models in its Azure AI Foundry, making the xAI models more accessible in Microsoft environments like the one used at OPM. Should the agency want it, this would make Grok available as an option going forward. In February, Palantir struck a deal to include Grok as an AI option in the company's software, which is frequently used in government.

Over the last few months, DOGE has rolled out and used a variety of AI-based tools at government agencies. In March, WIRED reported that the US Army was using a tool called 'CamoGPT' to remove DEI-related language from training materials. The General Services Administration rolled out 'GSAi' earlier this year, a chatbot aimed at boosting overall agency productivity. OPM has also accessed software called AutoRIF that could assist in the mass firing of federal workers.
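For readers curious what 'running Llama 2 locally to classify replies' looks like in practice, here is a minimal, hypothetical sketch using the open-source llama-cpp-python bindings. The model file, prompt, and labels are invented for illustration; this is a sketch of the general technique, not DOGE's actual code or pipeline.

```python
# Hypothetical sketch: classify email replies with a locally run Llama 2
# model via llama-cpp-python. All names here are illustrative assumptions.
from llama_cpp import Llama

# Load a quantised Llama 2 chat model from a local GGUF file (assumed path).
llm = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf", verbose=False)

def classify_reply(body: str) -> str:
    """Return 'ACCEPT' if the reply takes the resignation offer, else 'OTHER'."""
    prompt = (
        "Classify the following email reply as ACCEPT if it accepts a "
        "resignation offer (for example, by saying 'resign'), otherwise OTHER.\n"
        f"Reply: {body!r}\n"
        "Label:"
    )
    out = llm(prompt, max_tokens=3, temperature=0.0)  # deterministic decoding
    return out["choices"][0]["text"].strip().upper()

print(classify_reply("Resign"))  # expected: ACCEPT
```

Because everything in a setup like this runs on the local machine, no email content would leave it, consistent with WIRED's observation that the model appears to have run locally rather than over the internet.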


Time of India
09-05-2025
- Business
- Time of India
Meta appoints former Google DeepMind director Robert Fergus as head of AI Research lab
Facebook parent Meta has informed its staff that it has appointed former Google DeepMind director Robert Fergus to lead its artificial intelligence research lab. According to a report by Bloomberg, Fergus will head the Fundamental AI Research (FAIR) lab at Meta. Fergus co-founded the Facebook AI Research lab (FAIR) along with Yann LeCun in 2014. The unit handles AI research at the company, creating models for advanced robotics and audio generation and pushing the boundaries of AI capabilities.

As per Fergus's LinkedIn profile, he was a research director at Google DeepMind for five years. Before joining Google, Fergus worked at Meta as a research scientist.

As per the report, Chief Product Officer Chris Cox informed Meta employees that Fergus has joined FAIR and succeeds Joelle Pineau, who announced her departure plans last month. 'We're working towards building human-level experiences that transform the way we interact with technology and are dedicated to leading and advancing AI research,' Fergus said in a post on LinkedIn.

According to a report from Fortune, FAIR led research on the company's early AI models, including Llama 1 and Llama 2. However, the report states that many researchers have departed FAIR for other startups, companies, and even Meta's newer GenAI group, which spearheaded the development of Llama 4.