
Alibaba Cloud boosts APAC presence with new AI centre, data hubs
Techday NZ
02-07-2025
Alibaba Cloud has announced the launch of new data centres in Malaysia and the Philippines and the establishment of its first AI Global Competency Center in Singapore as part of its continued expansion across Asia Pacific and other regions. The announcement coincides with Alibaba Cloud's tenth anniversary in Singapore and the tenth year since it established its international headquarters in the city-state. The company also revealed new upgrades to its cloud and AI technologies and released findings from a global study on green AI adoption.
Regional expansion
Alibaba Cloud has confirmed the opening of its third data centre in Malaysia and outlined plans to launch a second facility in the Philippines in the coming months. These additions follow recent infrastructure investments made in Thailand, Mexico, and South Korea earlier in the year.
The company said the investments aim to support the rising demand for secure and scalable cloud solutions as more industries increase AI adoption. The expanded network is intended to provide capacity for businesses, developers, and organisations to innovate and manage growth across new markets.
AI Global Competency Center
Alibaba Cloud has launched its AI Global Competency Center (AIGCC) in Singapore. The centre targets support for more than 5,000 businesses and 100,000 developers worldwide, providing access to AI models, advanced computing resources, and an AI innovation lab. The lab offers token credits, datasets, and personalised support designed around industry needs.
The AIGCC will engage over 1,000 companies and startups to co-develop AI solutions, and will introduce more than 10 AI agents for use in sectors such as finance, healthcare, logistics, manufacturing, retail, and energy. Alibaba Cloud has also committed to partnering with over 120 universities and institutions globally to train 100,000 AI professionals each year.
Selina Yuan, President of International Business at Alibaba Cloud Intelligence, said, "Over the past decade, Singapore has been both an innovation center and a gateway to the region's digital economy. As we celebrate this important milestone, we reaffirm our commitment to empowering businesses of all sizes and verticals while advancing cutting-edge AI innovations and driving sustainable digital transformation in Singapore for years to come. Together with our partners and customers, we look forward to shaping Singapore's future as a global leader in AI and cloud innovation."
Technology developments
Among the new cloud products presented, Alibaba Cloud has released upgrades to its Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) offerings. The Data Transmission Service (DTS) now features "One Channel For AI", which channels both structured and unstructured data—ranging from documents to multimedia—into vector databases. This enables developers to create knowledge bases and Retrieval-Augmented Generation (RAG) applications more efficiently.
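The article gives no implementation details for "One Channel For AI", but the general pattern it describes—ingesting heterogeneous documents into a vector database so a model can retrieve them for RAG—can be sketched. Everything below is a hypothetical, minimal illustration: the ToyVectorStore class and the hashed bag-of-words embed function are stand-ins, not Alibaba Cloud's DTS API.

```python
import hashlib
import math

DIM = 64  # toy embedding width

def embed(text: str) -> list[float]:
    """Toy stand-in for an embedding model: hash each word into one of
    DIM buckets, then L2-normalise the count vector."""
    vec = [0.0] * DIM
    for word in text.lower().split():
        vec[int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class ToyVectorStore:
    """Minimal in-memory stand-in for a managed vector database."""

    def __init__(self):
        self.rows = []  # (embedding, chunk_text, metadata)

    def upsert(self, chunks, source):
        for text in chunks:
            self.rows.append((embed(text), text, {"source": source}))

    def query(self, question, top_k=2):
        # rank stored chunks by cosine similarity (vectors are unit-length,
        # so a dot product suffices)
        q = embed(question)
        ranked = sorted(
            self.rows,
            key=lambda row: -sum(a * b for a, b in zip(q, row[0])),
        )
        return [(text, meta) for _, text, meta in ranked[:top_k]]

def chunk(document, size=12):
    """Split a document into fixed-size word windows before embedding."""
    words = document.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

# Ingest mixed sources into one store, then retrieve grounding
# passages for a RAG-style question.
store = ToyVectorStore()
store.upsert(chunk("Invoices are processed nightly and archived for seven years."),
             source="finance_policy.txt")
store.upsert(chunk("Shipping labels are printed on demand at the Singapore warehouse."),
             source="logistics_notes.txt")
hits = store.query("how long are invoices archived")
```

A production pipeline would swap the toy embed for a learned embedding model and ToyVectorStore for a managed vector database; the chunk, embed, upsert, and query flow is the part a managed channel like DTS's feature would automate.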
The Platform for AI (PAI) has improved its inference capabilities, including optimisations for complex model architectures such as Mixture of Experts. A new feature, Expert Parallel (EP), aims to increase throughput for large language models (LLMs) while conserving computational resources. The Model Weights Service now allows for faster startup and scaling times, demonstrated by tests showing cold starts accelerated by up to 91.4% on certain models.
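Neither PAI's internals nor the Expert Parallel feature's API are described in the article, but the routing idea behind Mixture of Experts that expert parallelism exploits can be sketched: a gating network activates only a few experts per token, and that sparse expert set can then be sharded across accelerators. All names and numbers below are illustrative assumptions, not PAI code.

```python
NUM_EXPERTS = 4
TOP_K = 2  # only this many experts run per token ("sparse" activation)

def moe_forward(token_vec, gate_weights, experts):
    """Route one token through a Mixture-of-Experts layer: score every
    expert, keep the TOP_K best, and blend their outputs by gate weight."""
    scores = [sum(w * x for w, x in zip(row, token_vec)) for row in gate_weights]
    top = sorted(range(NUM_EXPERTS), key=lambda i: -scores[i])[:TOP_K]
    total = sum(scores[i] for i in top) or 1.0
    output = sum(scores[i] / total * experts[i](token_vec) for i in top)
    return output, top

# Fixed toy parameters so the example is deterministic.
gate_weights = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5], [0.1, 0.9]]
experts = [lambda v, k=i: (k + 1) * sum(v) for i in range(NUM_EXPERTS)]

out, active = moe_forward([1.0, 0.0], gate_weights, experts)
# Only the `active` experts consumed compute for this token; expert-parallel
# serving shards the expert set across devices along exactly this boundary,
# which is how throughput rises without every expert running everywhere.
```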
Alibaba Cloud's ninth-generation Intel-based Enterprise Elastic Compute Service instances will be rolled out to new global markets, including Japan, South Korea, Thailand, Malaysia, the Philippines, the United Arab Emirates, Germany, and the UK. The instance family, first launched in April, reportedly offers 20% better computing efficiency than prior generations, with performance improvements of up to 50% for specific workloads.
The company's sustainability platform, Energy Expert, has introduced an AI-driven ESG reporting solution built on Alibaba's own model, Qwen. This platform aims to streamline ESG report generation and compliance, providing automated content creation and structured guidance for organisations needing to align with international standards such as ISSB, GRI, and SASB.
Findings on green AI
Alibaba Cloud has published results from a global Forrester Consulting survey on green AI, conducted in collaboration with NTU Global e-Sustainability CorpLab. The survey of 464 business and IT leaders found that 84% of those with sustainability strategies regard green AI as important, yet 69% of organisations remain at an early stage of adoption.
Key barriers identified included a lack of sustainably sourced AI hardware materials and challenges in optimising data centre energy use. Significant skills and knowledge gaps were also reported, with 74% indicating uncertainty around defining green AI strategies and 76% lacking operational expertise in the field. The study recommends strategies like powering data centres with renewable energy, optimising models for edge computing, and enhancing regulatory collaboration.
Customer engagement
Several international clients were highlighted for their collaborations with Alibaba Cloud. These include GoTo Group, which migrated its business intelligence platform to Alibaba Cloud's MaxCompute solution, aiming for greater scalability and resilience. William Xiong, Group Chief Technology Officer of GoTo Group, said during the summit, "The migration to Alibaba Cloud's MaxCompute has enhanced the scalability and resilience of our data platform. By delivering cost efficiency, performance parity, and operational continuity, this collaboration strengthens the technical foundation for GoTo's ecosystem. This partnership positions us to drive innovation and deliver transformative solutions for millions of users across the ecosystem, while staying aligned with Indonesia's data sovereignty goals."
GoTo Financial also reported efficiency gains through Alibaba Cloud's database products, including PolarDB and Tair, which now support over 500 microservices with low latency.
Qwen, Alibaba's large language model family, continues to be deployed in numerous markets. VisionTech, based in Singapore, has integrated Qwen into its generative AI platform to support multilingual operations. The company reports a 25% reduction in infrastructure costs and improved response times as a result. "Our partnership with Alibaba Cloud allows us to deliver smarter, scalable, and enterprise-ready AI solutions while maintaining operational efficiency and customer satisfaction," said Lim Hui Jie, CEO of VisionTech. "Qwen's strong performance in handling multilingual conversational inputs and real-time translation gives us a distinct edge over other LLMs, enabling us to fast-track deployments and improve user engagement—whether it's English, Chinese, Malay, or Japanese. By dynamically switching languages in real-time, our AI bots create a seamless experience that resonates with users in various markets, ensuring that our solutions feel native and culturally aligned."
FLUX in Japan and Al-Futtaim in the Middle East have also joined partnerships with Alibaba Cloud, focusing on deploying Qwen-based solutions and expanding the reach of AI-powered services in their respective markets.
Related Articles




Techday NZ
25-06-2025
Teradata launches on-premises AI Factory for secure private AI
Teradata has announced the launch of Teradata AI Factory, an integrated solution delivering the company's cloud-based artificial intelligence (AI) and machine learning (ML) capabilities to secure, on-premises environments. The AI Factory has been built in collaboration with NVIDIA and unifies key components, including data pipelines, algorithm execution, and software infrastructure, into a single, scalable system. The solution is intended to accelerate AI development—covering predictive, generative, and agentic AI—through private deployments while facilitating governance, compliance, and security for enterprises.
Teradata AI Factory is designed to integrate software, hardware, and a combination of Teradata and third-party tools, aiming to decrease both compliance risks and costs. When paired with Teradata AI Microservices with NVIDIA and customer-provided NVIDIA GPUs, the platform supports accelerated development, including native Retrieval-Augmented Generation (RAG) pipelines, which are increasingly in demand among data-driven organisations. The company has positioned the solution as particularly relevant for industries with high regulatory requirements, such as healthcare, finance, and government, as well as for any enterprise needing greater control and autonomy over its AI strategy and deployments.
Changing requirements
According to the company, current global instability and stricter data sovereignty regulations are prompting organisations to seek more control over their AI infrastructure. These factors coincide with financial pressures that can result from both underused GPU investments and variable cloud computing costs, especially within hybrid enterprise environments. The increasing complexity of AI ecosystems is expected to further drive demand for integrated, turnkey solutions that address both cost and governance issues.
"Market dynamics are increasing buyer interest in on-premises solutions," said Teradata's Chief Product Officer, Sumeet Arora. "Teradata remains the clear leader in this environment, with proven foundations in what makes AI meaningful and trustworthy: top-notch speed (performance), predictable cost (resource efficiency), and integration with the golden data record (which may already live on Teradata). Teradata AI Factory builds on these strengths in a single solution for organisations using on-prem infrastructure to gain control, meet sovereignty needs, and accelerate AI ROI."
A recent Gartner report states: "By 2028, more than 20% of enterprises will run AI workloads (training or inference) locally in their data centers, an increase from approximately 2% as of early 2025." ("How to Determine Infrastructure Requirements for On-Premises Generative AI", Chandra Mukhyala, Jonathan Forest and Tony Harvey, 5 March 2025.)
Feature set
Teradata AI Factory is structured to provide enterprises with a comprehensive on-premises AI solution incorporating security, cost efficiency, and seamless hardware-software integration. Its feature set includes Teradata's Enterprise Vector Store as well as Teradata AI Microservices, the latter of which leverages NVIDIA NeMo microservices to enable native RAG pipeline capabilities. The platform's architecture aims to address sensitive data requirements by keeping data within the organisation's boundaries, thereby reducing the risks commonly associated with public or shared AI platforms—including data exposure, intellectual property leakage, and challenges with regulatory compliance.
Teradata AI Factory supports compliance with established standards such as GDPR and HIPAA, positioning it as an option for organisations where data residency and privacy are priorities. Its localised set-up is designed to facilitate high levels of AI performance while lowering latency and operational inefficiency through reduced data movement. Customers can choose to deploy AI models on CPUs or accelerate performance using their existing GPU infrastructure. This approach seeks to avoid unpredictable cloud expenses, allowing organisations to maintain consistent operational costs and prepare for scaled private AI innovation going forward.
Technical integration
Teradata AI Factory presents an integrated, ready-to-run stack for AI applications. It includes:
AI Platform for Rapid Innovation: Built on Teradata's IntelliFlex platform, the AI Factory incorporates Teradata Enterprise Vector Store, enabling integration of structured and unstructured data for generative AI applications.
Software Infrastructure: The AI Workbench provides a self-service workspace with access to analytics libraries, including those from ClearScape Analytics. It also offers model lifecycle management, compliance tools, one-click large language model (LLM) deployment, and supports JupyterHub, ModelOps, Airflow, Gitea, and Devpi.
Algorithm Execution: The system supports scalable execution of predictive and generative algorithms, facilitating high performance through connections with customer GPUs and delivering native RAG processing.
Data Pipelines: The solution includes data ingestion tools and internal capabilities such as QueryGrid, Open Table Format (OTF) compatibility, object store access, and support for NVIDIA utilities for complex data formats such as PDFs.
By processing data locally within an organisation's infrastructure, Teradata AI Factory is intended to enhance data security and operational integrity, providing greater control and certainty for those adopting private AI strategies.


Techday NZ
17-06-2025
AI: The future belongs to those who put the humans in the machine first
In 1993, Ghost in the Machine imagined a future where consciousness could exist inside a computer. Three decades later, that vision has blurred into reality, and machine intelligence is no longer a science fiction trope - it's a tool we use every day. But the real shift isn't just about building smarter systems; it's about building systems that support smarter humans.
As generative AI spreads across legal practice, the advantage is no longer in what you know, but in how well you reason, because recall is easy - anyone can pull up case law. The real edge lies in interpretation, explanation and judgment. And while today's models don't always reason perfectly - neither do humans. The better question is: can AI help lawyers reason better? This is where things get interesting.
More data ≠ better model
Let's start with the false promise of infinite data. It's widely understood that throwing thousands of pages of legislation, regulation, case law and other legal documents at a model doesn't make it smarter. In fact, it often makes it worse, because legal reasoning depends on, amongst other things, quality, relevance and clarity. A carefully curated dataset of law and precedent in an expertise domain in a particular jurisdiction (and potentially some related jurisdictions) can outperform a bloated corpus of global case law riddled with inconsistencies and irrelevance. Here, the model doesn't need to 'know the law' - it needs to retrieve it with precision and reason over the top with discipline.
That's why, in most practical applications in a specific domain of expertise, Retrieval-Augmented Generation (RAG) will probably beat full fine-tuning. RAG lets you plug into a general-purpose model that's already been trained on a vast body of knowledge, and then layer on your own curated legal content in real time - without the need for full re-training. It's fast, flexible and keeps you close to the constantly evolving edge of legal precedent. If fine-tuning is like rewriting the engine, RAG is like swapping in smarter fuel - giving you a model that reasons over your trusted material instead of guessing based on a noisy global corpus. This is the difference between dumping legal textbooks on your desk and actually having a partner walk you through the implications.
Reasoning over regurgitation
Take a real-world query: "Can an employee working remotely in Melbourne still claim a travel allowance under their enterprise agreement?" An untrained model might respond with this: "There are hundreds of examples of travel allowances in Australian enterprise agreements… shall I find these for you and list them?" Helpful? Not really. A well-trained legal AI might say this instead: "It depends on the specific terms of the enterprise agreement that applies to the employee. Travel allowances are typically tied to physical attendance at a designated worksite, and if an employee's role has been formally varied to remote or hybrid, including under a flexible work arrangement, the allowance may no longer apply. You'd need to check whether the agreement defines a primary work location, whether remote work was agreed (under Section 65 of the Fair Work Act or otherwise) and whether there are any clauses preserving travel entitlements in such cases." Now we're not 'just' talking about answers; we're talking about prompts for strategic thinking.
Scaling senior expertise, insight and judgment, not just recall
The much deeper question is this: how do we train AI not just to answer, but to remind us to ask better questions? Clients don't pay us for information; they pay for interpretation, and come to top-tier firms because they want the kind of insight only senior legal professionals can provide - the kind that draws on pattern recognition through lots of relevant experience, strategic insight and framing, and an understanding of nuance built across decades of practice.
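The RAG-over-curated-content pattern described earlier - retrieval from a trusted, jurisdiction-specific corpus layered onto a general-purpose model at query time, instead of fine-tuning - can be sketched in a few lines. The corpus snippets, function names and prompt wording below are hypothetical illustrations, not any firm's actual system.

```python
CURATED_CORPUS = [
    "Fair Work Act s 65: eligible employees may request flexible working arrangements.",
    "Enterprise agreements commonly tie travel allowances to attendance at a designated worksite.",
    "Model WHS Regulations: duties of a person conducting a business or undertaking.",
]

def retrieve(question, corpus, top_k=2):
    """Rank passages by word overlap with the question - a crude stand-in
    for vector search over a curated legal corpus."""
    q_words = set(question.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(q_words & set(p.lower().split())))
    return ranked[:top_k]

def build_prompt(question, corpus):
    """Layer retrieved, trusted material onto a general-purpose model's
    prompt at query time - no re-training of the underlying model."""
    context = "\n".join(f"- {p}" for p in retrieve(question, corpus))
    return (
        "Answer using only the sources below. Reason step by step and "
        "flag what else the client should check.\n"
        f"Sources:\n{context}\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    "Can a remote employee still claim a travel allowance under their enterprise agreement?",
    CURATED_CORPUS,
)
```

A real deployment would use embedding-based search and a production LLM call; the point of the sketch is that only the retrieval layer - the curated content and the reasoning instructions - changes, while the underlying model stays untouched.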
The real opportunity lies in scaling what clients actually value most: the expertise of senior partners - including their insight, experience, judgment and contextual thinking. This means training AI to reason like a partner - to recognise what matters, frame choices, reason through trade-offs and flag what clients will care about. We should be asking: how do we encode that? How do we teach a model to say not just 'here's what the law says', but 'here's how you might think about this, and here's what clients like yours have cared about in similar cases'? This represents an all-important shift from knowledge to judgment, and from retrieval to reasoning. Because the goal isn't to build a machine that knows everything, but to build one that helps your lawyers engage with better questions, surface richer perspectives and unlock more strategic conversations that create value for clients.
It's important to remember: AI hears what is said, but great lawyers listen for what isn't said. That's where real context lives - within tone, hesitation and the unspoken concerns that shape top-tier legal advice. To build AI that supports nuanced thinking, we need to train it on more than documents; we need to model real-world interactions and teach it to recognise the emotional cues that matter. This isn't about replacing human intelligence but about amplifying it, helping lawyers read between the lines and respond with sharper insight. This, in turn, might open up brand new use cases. Imagine if AI could listen in on client-lawyer conversations, not just for note-taking but to proactively suggest risks, flag potential misunderstandings or surface relevant precedents in real time based on the emotional and contextual cues it detects.
From knowledge to insight: What great training looks like
If we want AI to perform like a partner, we need the model not to give lawyers the answer but to do what a senior partner would do in conversation: "Here's what you need to think about... Here are two approaches clients tend to prefer... and here's a risk your peers might not spot." This kind of reasoning-first response can help younger lawyers engage with both the material and the client without needing to escalate every issue to their senior. Importantly, it's not about skipping the partner - it's about scaling their thinking. Scaling the apprenticeship model in ways not possible in the past.
If you're not solving for:
What the client really cares about, and why
How to recognise the invisible threads between past matters and current situations, options and decisions
How to ask the kinds of questions a senior practitioner would ask
The kind of prompt to use to achieve this
…then you're not training AI… you're just hoping like hell that it helps.
This is also where RAG and training intersect. Rather than re-training the model from scratch, we can use RAG to ensure the model is drawing from the right content - legal guidance, judgment notes, contextual memos - while training it to reason the way our top partners do. Think of it less like coding a robot, and more like mentoring a junior lawyer with access to every precedent you've ever relied on.
Some critics, including recent research, have questioned whether today's large language models can truly reason or reliably execute complex logical tasks. It's a fair challenge and one we acknowledge, but it's also worth noting that ineffective reasoning isn't new. Inconsistency, bias and faulty heuristics have long been a part of human decision-making. The aim of legal AI isn't to introduce flawless reasoning, but to scale the kind of strategic thought partners already apply every day, and to prompt richer thinking, not shortcut it.
How to structure a real firm-level AI rollout
As AI becomes embedded in professional services, casual experimentation is no longer enough.
Legal firms need structured adoption strategies, and one of the best frameworks could be what Wharton professor Ethan Mollick calls the 'Lab, Library, and Leadership' model for making AI work in complex organisations. In his breakdown:
Lab = the experimental sandbox where teams pilot real-world use cases with feedback loops and measurable impact.
Library = the curated knowledge base of prompts, best practices, guardrails and insights (not just raw documents, but how to use these well).
Leadership = the top-down cultural shift that's needed to legitimise, resource and scale these efforts.
For law firms, this maps elegantly to our current pressing challenges: the Lab is where legal teams experiment with tools like RAG-based models on live matters. The Library is the evolving playbook of prompt templates, safe document sources and past legal reasoning. And Leadership (arguably the most vital) is what determines whether those ideas ever leave the lab and reach real matters and clients. As Mollick puts it, "AI does not currently replace people, but it does change what people with AI are capable of."
The firms that win in this next chapter won't just use AI - they'll teach their people how to build with it. And critically, they'll keep teaching it. Most models, including GPT-4, are built on datasets with a cut-off and, as a consequence, they are often months or even years out of date. If you're not feeding the machine fresh experiences and insights, you're working with a version of reality that's already stale. This isn't a 'one and done' deployment - it's an ongoing dialogue, and by structuring feedback loops from live matters, debriefs and partner insights, firms can ensure the model evolves alongside the business, not behind it.
Putting humans in the machine
Ultimately, legal AI isn't about machine innovation; it's about human innovation, and the real challenge is how to capture and scale the experience, insight, judgment and strategic thinking of senior lawyers. That requires sitting down with partners to map how they approach a question, what trade-offs they consider and how they advise clients through complexity. That's the real creativity, and that's what we need to encode into the machine. Lawyer 2.0 isn't just AI-assisted - it's trained by the best, for the benefit of the many. The future of legal work will belong to those who put humans in the machine first.