Alibaba launches Qwen3, open-source AI for global developers

Techday NZ, 05-05-2025
Alibaba has introduced Qwen3, the latest generation of its open-source large language model series.
The Qwen3 series includes six dense models and two Mixture-of-Experts (MoE) models, which aim to offer developers flexibility to build advanced applications across mobile devices, smart glasses, autonomous vehicles, and robotics.
All models in the Qwen3 family—spanning dense models with 0.6 billion to 32 billion parameters and MoE models with 30 billion (3 billion active) and 235 billion (22 billion active) parameters—are now open-sourced and accessible globally.
Qwen3 is Alibaba's first release of hybrid reasoning models. These models blend conventional large language model capabilities with more advanced and dynamic reasoning. Qwen3 can transition between "thinking mode" for complex multi-step tasks such as mathematics, coding, and logical deduction, and "non-thinking mode" for rapid, more general-purpose responses.
For developers using the Qwen3 API, the model provides control over the duration of its "thinking mode," which can extend up to 38,000 tokens. This is intended to enable a tailored balance between intelligence and computational efficiency. The Qwen3-235B-A22B MoE model is designed to lower deployment costs compared to other models in its class.
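As an illustration of how that thinking budget might be set, the sketch below assembles a request body for an OpenAI-compatible chat endpoint. The field names `enable_thinking` and `thinking_budget` are assumptions used for illustration; the exact parameter names and limits are defined by the serving API's reference documentation.

```python
import json

def build_chat_request(prompt, thinking=True, thinking_budget=4096):
    """Assemble a request body for an OpenAI-compatible chat endpoint.

    `enable_thinking` and `thinking_budget` are hypothetical field names
    used for illustration; the real parameter names and limits are set
    by the serving API.
    """
    return {
        "model": "qwen3-235b-a22b",
        "messages": [{"role": "user", "content": prompt}],
        # Toggle hybrid reasoning and cap the reasoning token budget.
        "extra_body": {
            "enable_thinking": thinking,
            "thinking_budget": thinking_budget,
        },
    }

# Complex task: allow an extended thinking budget.
deep = build_chat_request("Prove that sqrt(2) is irrational.", thinking_budget=16384)
# Simple task: skip thinking for a faster reply.
fast = build_chat_request("Say hello in French.", thinking=False)
print(json.dumps(deep, indent=2))
```

Setting `thinking=False` requests the fast non-thinking mode for general-purpose replies, while a larger budget gives the model more room for multi-step reasoning.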
Qwen3 has been trained on a dataset comprising 36 trillion tokens, double the size of the dataset used to train its predecessor, Qwen2.5. Alibaba reports that this expanded training has improved reasoning, instruction following, tool use, and multilingual tasks.
Among Qwen3's features is support for 119 languages and dialects. The model is said to deliver high performance in translation and multilingual instruction-following.
Advanced agent integration is supported with native compatibility for the Model Context Protocol (MCP) and robust function-calling capabilities. These features place Qwen3 among open-source models targeting complex agent-based tasks.
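Function calling generally follows the round trip sketched below: the application advertises a tool schema, the model emits a call with JSON-encoded arguments, and the application dispatches it to local code. This sketch uses the common OpenAI-style tool schema, which may differ from the exact format Qwen3 expects, and `get_weather` is a hypothetical tool defined only for illustration.

```python
import json

# Tool definition in the common OpenAI-style function-calling schema.
# "get_weather" is a hypothetical tool used only for illustration.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def dispatch(tool_call):
    """Route a model-emitted tool call to the matching local function."""
    args = json.loads(tool_call["arguments"])  # arguments arrive as JSON text
    if tool_call["name"] == "get_weather":
        return {"city": args["city"], "temp_c": 21}  # stubbed result
    raise ValueError(f"unknown tool: {tool_call['name']}")

# Simulate the model asking to call the tool with JSON-encoded arguments.
result = dispatch({"name": "get_weather", "arguments": '{"city": "Auckland"}'})
```

The dispatch result would normally be appended to the conversation as a tool message so the model can compose its final answer from it.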
Regarding benchmarking, Alibaba states that Qwen3 surpasses previous Qwen models—including QwQ in thinking mode and Qwen2.5 in non-thinking mode—on mathematics, coding, and logical reasoning tests.
The model also aims to provide more natural experiences in creative writing, role-playing, and multi-turn dialogue, supporting more engaging conversations.
Alibaba reports strong performance by Qwen3 models across several benchmarks, including AIME25 for mathematical reasoning, LiveCodeBench for coding proficiency, BFCL for tools and function-calling, and Arena-Hard for instruction-tuned large language models. The development of Qwen3's hybrid reasoning capacity involved a four-stage training process: long chain-of-thought cold start, reasoning-based reinforcement learning, thinking mode fusion, and general reinforcement learning.
Qwen3 models are now freely available on digital platforms including Hugging Face, GitHub, and ModelScope. An API is scheduled for release via Alibaba's Model Studio, the company's development platform for AI models. Qwen3 is also integrated into Alibaba's AI super assistant application, Quark.
The Qwen model family has attracted over 300 million downloads globally. Developers have produced over 100,000 derivative models based on Qwen on Hugging Face, which Alibaba claims ranks the series among the most widely adopted open-source AI models worldwide.

Related Articles

Sinch launches Model Context Protocol to drive AI messaging

Techday NZ, 5 days ago

Sinch has launched its implementation of the Model Context Protocol (MCP), allowing artificial intelligence agents to initiate compliant, real-time telecommunications activities across messaging, voice, email, and verification channels via standardised interfaces.

The MCP is an emerging protocol intended to standardise how AI agents interact with various systems and services. Sinch's deployment of the protocol is designed to give AI agents the ability to carry out communications tasks directly through its platform. These tasks range from orchestrating marketing campaigns to client notifications, identity verification processes, and customer service handling.

AI-driven communications

According to Sinch, MCP is engineered to manage AI-scale communication volumes, suitable for tasks demanding rapid, automated interaction rather than the slower cadence typically associated with human-initiated communications. The implementation supports integration with AI tools, including the OpenAI SDK, Claude, and Microsoft's Azure AI, and is delivered with compliance and security protocols incorporated as standard.

The company states that MCP helps support a broad transition away from traditional brand-centric applications to direct communication channels between enterprises and their customers. Sinch currently manages over 900 billion customer interactions each year for 175,000 businesses in more than 60 countries, providing messaging, voice, email, and verification services, and drawing upon its local compliance and routing expertise.

Global scale and expertise

Sinch customers have already begun to report outcomes claimed to result from the shift towards AI-assisted engagement. For example, a global insurer has been able to autonomously process 80% of customer enquiries across 125 languages, while a retail client tripled engagement by integrating conversational AI with Rich Communication Services (RCS).

The company cited data from its State of Customer Communications Report suggesting that 95% of businesses are currently using or planning to use AI in customer communications. Research from IDC projects that the global AI platforms market will reach USD $153.0 billion by 2028.

MCP implementation details

Through its new MCP server, now available in developer preview with Claude, Sinch is providing a mechanism for AI agents to understand the requirements of different communication actions. The server allows agents to determine which channel should be used, how messages should be formatted for different jurisdictions, which regulatory rules apply, and how to ensure successful delivery. Sinch notes that these capabilities are accessible via a range of tools, including development environments like Cursor and frameworks such as the OpenAI Agents SDK, as well as platforms like AgenticFlow and Microsoft Azure AI Foundry.

"AI is transforming how businesses communicate, and Sinch has the proven infrastructure to make it work at scale," said Robert Gerstmann, Chief Evangelist and Co-Founder at Sinch. "With MCP, we're codifying decades of communications expertise into protocols that AI agents can understand, teaching them the specific requirements, compliance rules, and best practices needed for each use case and region. What matters most happens behind the scenes: guaranteeing delivery, maintaining quality, navigating compliance, and preventing fraud. We've spent decades perfecting these operational fundamentals that make AI-powered communications actually work."

Strategic partnerships

The MCP implementation is part of Sinch's broader strategic approach to AI communications. Alongside established integrations with OpenAI and Anthropic, Sinch also provides routing systems and conversational AI functionality, intending to offer enterprises a comprehensive platform for deploying AI-assisted communication strategies.

Sinch's partnerships span a variety of major technology companies. It is an Adobe Platinum Partner and has links with Salesforce Agentforce and Microsoft Dynamics Customer Insights, which the company reports strengthens its position within the enterprise AI communications landscape.

"At Sinch we are pioneering the way the world communicates, and our MCP implementation represents the next evolution of that mission," said Laurinda Pang, CEO of Sinch. "Through the expansion of native AI capabilities and partnerships, we're equipping organizations with unprecedented capabilities to connect with customers anywhere, anytime, through any channel. We envision a world where every business, regardless of size or technical sophistication, can harness the power of intelligent communications to keep their customers engaged, informed, safe, and happy."

GitLab Duo Agent Platform beta unlocks AI-human collaboration

Techday NZ, 5 days ago

GitLab has opened public beta access to its GitLab Duo Agent Platform, a DevSecOps orchestration platform enabling asynchronous collaboration between developers and AI agents.

Product details

The GitLab Duo Agent Platform introduces an orchestration layer designed to allow specialised AI agents and human developers to collaborate within software development projects. By leveraging GitLab as the system of record, the platform delivers broad project context to AI agents, supporting informed decision-making in line with organisational standards. The company has made the public beta available to Premium and Ultimate customers.

The initial set of features includes Software Development Flow, the first orchestrated multi-agent workflow that accumulates context, clarifies ambiguities with developers, and implements changes to codebases and repositories using project structures, codebase history, and supplementary context such as GitLab issues and merge requests.

Specialised agents and workflows

Specialised agents on the platform mirror established team roles, with capabilities to search, read, create, and modify existing artefacts across GitLab. The platform also features agent Flows, which are structured, predetermined workflows that can coordinate multiple specialised agents to autonomously execute complex or multi-step tasks. GitLab is also planning an AI Catalogue, a marketplace that will allow organisations to create, customise, and share agents and agent flows among their teams and the wider GitLab ecosystem.

Interface and support

Users of the public beta have access to GitLab Duo Agentic Chat within development environments, both in IDEs and the GitLab Web UI. According to GitLab, the chat experience has been transformed into an active development partner, supporting iterative feedback and chat history, as well as streamlined delegation using new slash commands such as /explain, /tests, and /include. These commands create a quick delegation language, and the /include feature allows for context injection from specific files, issues, merge requests, or dependencies. Developers can also personalise agent behaviour using custom rules, specifying guidance tailored to individual or team preferences through natural language instructions.

In addition to integration with Visual Studio Code, support has been extended to JetBrains IDEs such as IntelliJ, PyCharm, GoLand, and WebStorm. The platform also introduces Model Context Protocol (MCP) Client Support, which enables GitLab Duo Agentic Chat to connect to remote and local MCP servers. This allows agents to communicate with systems beyond GitLab, provided those systems are accessible via MCP, expanding the practical application of the platform's capabilities.

Future releases

GitLab stated that the scope and quality of the Duo Agent Platform will be expanded through subsequent 18.x releases, with a general availability target by the end of the year.

Industry perspectives

GitLab's own leadership and industry observers offered perspectives on the platform's beta release.

"GitLab Duo Agent Platform enhances our development workflow with AI that truly understands our codebase and our organisation," said Bal Kang, Engineering Platform Lead at NatWest. "Having GitLab Duo AI agents embedded in our system of record for code, tests, CI/CD, and the entire software development lifecycle boosts productivity, velocity, and efficiency. The agents have become true collaborators to our teams, and their ability to understand intent, break down problems, and take action frees our developers to tackle the exciting, innovative work they love."

Rachel Stephens, Research Director at RedMonk, commented, "As software development workflows grow in complexity and organisations look to leverage AI, there's an increasing need for platforms that can integrate AI capabilities without adding to existing disjointed toolchains. As a DevSecOps platform, GitLab is already positioned to help developers collaborate both synchronously and asynchronously. Now the GitLab Duo Agent Platform intends to take this a step further, helping developers also integrate AI agents into their workflows."

Bill Staples, Chief Executive Officer at GitLab, added, "Today marks a pivotal moment in software development as we introduce the public beta of the GitLab Duo Agent Platform, the first DevSecOps orchestration platform designed to unlock asynchronous collaboration between developers and AI agents. GitLab Duo Agent Platform isn't just another AI tool; it's a fundamental reimagining of software development from isolated, linear processes into dynamic, intelligent collaboration. By leveraging GitLab's unique position as the system of record for the entire software development lifecycle, we're providing AI agents with unprecedented context and capabilities. This enables our customers to work with AI agents that have comprehensive context about their codebase, their workflows, and their organisational goals to help boost productivity, velocity, and efficiency."

Zoho unveils Zia LLM & agent marketplace with focus on privacy

Techday NZ, 6 days ago

Zoho has introduced its proprietary large language model, Zia LLM, alongside new AI infrastructure investments designed to strengthen its business-focused artificial intelligence offerings and data privacy protections.

The suite of new tools includes the Zia LLM, over 25 prebuilt AI-powered agents available in an Agent Marketplace, the no-code Zia Agent Studio for custom agent building, and a Model Context Protocol (MCP) server providing third-party agents access to Zoho's extensive library of actions. These advancements are intended for both developers and end users, aiming to bring operational and financial efficiencies across a wide range of business needs and use cases.

Focus on technology and privacy

"Today's announcement emphasizes Zoho's longstanding aim to build foundational technology focused on protection of customer data, breadth and depth of capabilities, and value," said Mani Vembu, Chief Executive Officer at Zoho. "Because Zoho's AI initiatives are developed internally, we are able to provide customers with cutting-edge tool sets without compromising data privacy and organizational flexibility, democratizing the latest technology on a global scale."

The Zia LLM was developed entirely in-house using NVIDIA's AI accelerated computing platform and has been trained for Zoho product-specific use cases such as structured data extraction, summarisation, retrieval-augmented generation (RAG), and code generation. The model comprises three parameter sizes - 1.3 billion, 2.6 billion, and 7 billion - each optimised for different business contexts and benchmarked against similar open-source models. Zoho says these models allow the platform to optimise performance according to user needs, balancing computational power with efficiency, and plans to continue evolving its right-sizing approach to AI model deployment.

The Zia LLM will be rolled out across data centres in the United States, India, and Europe, initially supporting internal Zoho applications and expected to become available for customer deployment in the coming months.

Expansion in language and speech technology

Alongside its language model, Zoho is launching proprietary Automatic Speech Recognition (ASR) models capable of performing speech-to-text conversion in both English and Hindi. These models operate with low computational requirements without a reduction in accuracy and, according to Zoho, can deliver up to 75% better performance than comparable models in standard benchmark tests. Additional language support is expected to follow, particularly for languages predominantly spoken in Europe and India.

While many large language model integrations are supported on the Zoho platform, including ChatGPT, Llama, and DeepSeek, the company emphasises that Zia LLM enables customers to maintain their data on Zoho's servers, thus retaining control over privacy and security.

Agentic AI technology

To promote adoption of agentic AI, Zoho has made available a range of AI agents embedded within its core products. These agents are tailored to support common organisational functions such as sales, customer service, and account management. The newly updated Ask Zia conversational assistant now features business intelligence skills suitable for data engineers, analysts, and data scientists, allowing them to build data pipelines, create analytical reports, and initiate machine learning processes within an interactive environment. A new Customer Service Agent has also been launched, capable of processing and contextualising customer requests, providing direct responses, or escalating queries to human staff as needed.

Zia Agent Studio and Marketplace

The Zia Agent Studio offers a fully prompt-based, optionally low-code environment for building and deploying AI agents, giving users access to more than 700 predefined actions spanning the Zoho app ecosystem. Agents can be set for autonomous operation, triggered by user action, or integrated into customer communications. When deployed, these agents function as digital employees, adhering to existing organisational access permissions and allowing administrators to audit behaviour, performance, and impact.

Zoho's Agent Marketplace, now part of its existing marketplace offering over 2,500 extensions, allows for rapid deployment of prebuilt and third-party AI agents. A selection of prebuilt agents is available, including:

- Revenue Growth Specialist, which identifies opportunities for customer upsell and cross-sell
- Deal Analyser, which provides insights such as win probability and proposed follow-up actions for sales teams
- Candidate Screener, which ranks job applicants based on skills and suitability

Zoho has committed to regularly adding more prebuilt agents to address broader business needs and to enabling ecosystem partners, independent software vendors, and developers to build and host additional agents.

MCP interoperability and future roadmap

The deployment of the Model Context Protocol server will permit any MCP client to access data and actions from a growing collection of Zoho applications within the customer's existing permission framework. Using Zoho Flow, certain third-party tools can also be accessed, and Zoho Analytics now includes support for a local MCP server. Expanded application support is planned throughout the year.

Looking forward, Zoho intends to scale the Zia LLM's parameter sizes and extend speech-to-text capabilities to more languages. Plans also include the future release of a reasoning language model, further enhancements to Ask Zia's skills for finance and support teams, and the addition of protocol support for agent intercommunication within and beyond Zoho's platform.

Zoho remains focused on balancing practical AI support for business needs with privacy requirements, stating that its models are not trained on consumer data and that no customer information is retained. The company reiterates its privacy pledge to customers, with complete oversight of data held in its own operated data centres.
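The MCP servers described in these announcements exchange JSON-RPC 2.0 messages, with `tools/list` and `tools/call` as the core methods a client uses to discover and invoke a server's actions. The sketch below builds those request frames; the action name `create_record` and its arguments are hypothetical examples, not actual Zoho MCP actions.

```python
import itertools

# MCP traffic is JSON-RPC 2.0: `tools/list` discovers a server's actions
# and `tools/call` invokes one. The action name below is hypothetical.
_ids = itertools.count(1)

def mcp_request(method, params=None):
    """Build a single JSON-RPC 2.0 request frame for an MCP server."""
    msg = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        msg["params"] = params
    return msg

list_tools = mcp_request("tools/list")
call_tool = mcp_request("tools/call", {
    "name": "create_record",              # hypothetical action name
    "arguments": {"module": "Leads", "last_name": "Doe"},
})
```

A real client would serialise these frames and send them over the transport the server exposes, then match responses to requests by `id`.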
