
AI gamble must be smart, not just fast
The future of data sharing changed drastically when the US concluded that 9/11 reflected a failure of its intelligence agencies to act in concert on data that was already available – in effect, a "data fusion" crisis. The US Department of Homeland Security began setting up a robust network of "fusion centres" – state and locally run organisations that allow real-time sharing of critical intelligence and datasets between two or more government units to identify red flags.
Fast forward to 2025, and Artificial Intelligence (AI) is now taking over such "fusion centres" worldwide, with seemingly endless possibilities. AI agents are replacing humans, and language models are generating insights that were previously out of reach. However, as with every technology, the use of AI – especially in the public sector and in legal matters – remains a double-edged sword and must be approached with caution.
For instance, in June 2023, Steven Schwartz, an attorney with Levidow, Levidow & Oberman in New York, used ChatGPT for legal research and was fined by the judge for citing fabricated precedents with bogus names in his brief. The large language model (LLM) had been hallucinating – a failure mode in which these chatbots confidently invent fictitious information of their own accord.
Similarly, in March 2024, New York City's Microsoft-powered MyCity chatbot was found giving incorrect legal information that could have led prospective business owners to break the law. It falsely claimed that landlords could openly discriminate on the basis of tenants' income and that restaurant owners could take a share of their workers' tips.
Hence, when it comes to using AI, public institutions are now faced with a tough choice: should they rely on public AI models hosted by third parties such as ChatGPT, adopt open-source models such as LLaMA, or train their own proprietary AI models in the long run? Choosing the right AI strategy is crucial here.
In 2024, Air Canada's virtual assistant was found to have given a customer factually incorrect information about bereavement fares; the customer took the matter to a civil tribunal and was awarded damages.
Similarly, when Denmark rolled out AI algorithms in its social security system, the system was found to carry an inherent bias against marginalised groups such as the elderly, low-income families, migrants and foreigners. Ninety per cent of the cases the AI flagged as fraud later turned out to be genuine, and the episode is now taught as a classic case study in discrimination and in potential breach of the European Union's (EU) AI Act rules on social scoring systems.
Therefore, if a public sector organisation uses a third-party model such as one trained by OpenAI in its operations, there is a risk of bias against people of colour and other disadvantaged groups, since training data scraped from the internet, social media and discussion forums usually carries those biases itself.
A good AI strategy involves thoughtful, controlled, phased deployments built around well-planned use cases. For example, the US Department of Homeland Security (DHS) began with publicly available AI tools to improve employee productivity while also publishing its AI vision and development roadmap. In the meantime, it focused on developing specialised AI applications, such as one that trains officers who handle asylum applications and conduct security investigations.
By December 2024, DHS had launched DHSChat on its internal secure network – a chatbot that can draft reports, streamline tasks and help develop software, and that, unlike many commercial large language models, ensures employee data is protected and not used to train external models. As a best practice, and as mandated by the Trump administration's executive order, DHS actively maintains an AI inventory listing the AI use cases in its operations.
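To make the idea of an AI inventory concrete, a single entry might record roughly the following – a hedged sketch in Python, where the fields and example values are assumptions for illustration rather than DHS's actual schema.

    # Illustrative sketch of one AI use-case inventory record.
    # Field names and example values are assumptions, not an official schema.
    from dataclasses import dataclass

    @dataclass
    class AIUseCase:
        name: str              # short identifier for the use case
        purpose: str           # what the system is used for
        model_type: str        # public API, self-hosted open-source, or proprietary
        data_sensitivity: str  # e.g. public, internal, restricted
        human_oversight: bool  # is a human reviewer in the loop?
        rights_impacting: bool # could the use case affect individuals' rights?

    example = AIUseCase(
        name="internal-drafting-assistant",
        purpose="Draft reports and summarise documents on the internal network",
        model_type="self-hosted open-source",
        data_sensitivity="internal",
        human_oversight=True,
        rights_impacting=False,
    )

Keeping such records per use case is what makes later audits – of bias, data handling and oversight – tractable.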
For a country like Pakistan, our institutions could use a mix of public, open-source and proprietary models, depending on the nature of the task at hand. For using AI as the new Google, public models are usually fine; but for drafting memos and summarising reports, a public model is not advisable. For such work, the Ministry of IT or other institutions can host open-source AI models in their own data centres, or fine-tune them to develop proprietary models.
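One way to picture such a mixed strategy is a thin routing layer that decides, per request, whether a prompt may go out to a public model or must stay on self-hosted infrastructure. The sketch below is a minimal illustration in Python, assuming the open-source model is served behind an OpenAI-compatible chat endpoint (as servers such as vLLM can expose); the URL, model name and sensitivity labels are hypothetical, not a reference to any actual ministry system.

    # Minimal sketch: route requests by data sensitivity.
    # Endpoint, model name and labels are illustrative assumptions.
    import requests

    SELF_HOSTED_URL = "http://llm.internal.example/v1/chat/completions"  # hypothetical internal endpoint

    def call_public_api(prompt: str) -> str:
        # Stub: a real deployment would call the chosen public provider's SDK here.
        raise NotImplementedError("wire up the public provider of your choice")

    def route_request(prompt: str, sensitivity: str) -> str:
        """Send low-sensitivity queries to a public model; keep everything else in-house."""
        if sensitivity == "public":
            return call_public_api(prompt)
        # Sensitive material never leaves the organisation's own data centre.
        resp = requests.post(
            SELF_HOSTED_URL,
            json={
                "model": "llama-3-70b-instruct",  # assumed self-hosted open-source model
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

The point of the split is simple: a web-search-style question carries little risk, while a memo or report draft may contain data that should never reach a third-party server.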
For critical systems, it is advisable not to replace existing automation with AI entirely. A supervisory layer is needed to fact-check and verify the output of AI models for hallucinations and bias. No matter how attractive the idea of an AI-driven public sector may be, these models must be thoroughly tested and their behaviour examined before they are deployed.
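Such a supervisory layer can be as simple as a wrapper that refuses to release a draft until automated checks pass and a named human signs off. The sketch below is a hedged illustration only – the Draft fields, the citation check and the crude " v. " heuristic are placeholder assumptions, not a prescribed design – but it captures the principle that flagged or unreviewed output never goes out.

    # Minimal human-in-the-loop sketch: AI output is released only after
    # automated checks and explicit human approval. All names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Draft:
        prompt: str
        ai_output: str
        flags: list[str] = field(default_factory=list)
        approved_by: str | None = None

    def extract_citations(text: str) -> list[str]:
        # Crude placeholder: a real system would use a proper legal-citation parser.
        return [line.strip() for line in text.splitlines() if " v. " in line]

    def check_citations(draft: Draft, verified_citations: set[str]) -> None:
        """Flag any cited case not found in a verified index – the failure mode
        behind the fabricated precedents described earlier."""
        for citation in extract_citations(draft.ai_output):
            if citation not in verified_citations:
                draft.flags.append(f"unverified citation: {citation}")

    def release(draft: Draft) -> str:
        # Nothing is published while flags are open or no reviewer has signed off.
        if draft.flags or draft.approved_by is None:
            raise PermissionError("draft requires human review before release")
        return draft.ai_output

The same gate can be extended with bias or source checks; the essential property is that the AI's word alone is never enough to act on.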
The AI-based transformation project currently being executed at the Federal Board of Revenue (FBR) will serve as a test case for other AI-aspiring public agencies.
The writer is a Cambridge graduate and works as a strategy consultant.
Related Articles


Express Tribune
Is AI a scheming liar?
The world's most advanced AI models are exhibiting troubling new behaviours – lying, scheming, and even threatening their creators to achieve their goals. In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer and threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behaviour appears linked to the emergence of "reasoning" models – AI systems that work through problems step-by-step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts. "O1 was the first large model where we saw this kind of behaviour," explained Marius Hobbhahn, head of Apollo Research, which specialises in testing major AI systems. These models sometimes simulate "alignment" – appearing to follow instructions while secretly pursuing different objectives.

'Strategic kind of deception'

For now, this deceptive behaviour only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organisation METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception."

The concerning behaviour goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception."

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

No rules

Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents – autonomous tools capable of performing complex human tasks – become widespread. "I don't think there's much awareness yet," he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections. "Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around."

Researchers are exploring various approaches to address these challenges. Some advocate for "interpretability" – an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach. Market forces may also provide some pressure for solutions. As Mazeika pointed out, AI's deceptive behaviour "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it."

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes – a concept that would fundamentally change how we think about AI accountability.


Express Tribune
Mark Zuckerberg announces launch of Meta Superintelligence Labs
Meta chief executive Mark Zuckerberg has announced the launch of Meta Superintelligence Labs (MSL), a new AI unit intended to position the company at the forefront of artificial general intelligence development. The unit will bring together Meta's existing teams working on foundation models, including the open-source Llama model and its Fundamental AI Research (FAIR) division. It will also launch a new lab focused on what Zuckerberg described as 'the next generation' of models.

MSL will be led by Alexandr Wang, former chief executive of Scale AI, who joins Meta as chief AI officer. He will work alongside Nat Friedman, former GitHub CEO and a partner in the AI venture capital scene, who will oversee product and applied research efforts.

The announcement, made via an internal memo obtained by CNBC, comes as Meta accelerates its recruitment drive amid intense competition with OpenAI, Google, and Microsoft for top AI talent. The company recently hired Wang and several colleagues as part of a $14.3 billion investment in AI infrastructure. It also recruited Friedman and Daniel Gross, both previously involved with Safe Superintelligence, the AI venture co-founded by OpenAI's Ilya Sutskever.

In his memo, Zuckerberg said the emergence of superintelligence marked 'the beginning of a new era for humanity,' and that Meta was 'fully committed' to leading in its development. 'Meta is uniquely positioned to deliver superintelligence to the world,' he added, citing its scale, infrastructure, and experience in global product deployment.

The new division will include high-profile hires from leading labs such as OpenAI, Google DeepMind, and Anthropic. Zuckerberg also highlighted Meta's roadmap for Llama 4.1 and 4.2, which are already integrated across Meta platforms and used by more than a billion people monthly. Alongside this, the company is initiating work on its next set of frontier models, with a 'small, talent-dense' team still in formation.

The creation of MSL signals Meta's strategic intent to move beyond consumer-facing AI assistants and invest in foundational AI infrastructure. The announcement also reinforces Zuckerberg's vision of 'personal superintelligence for everyone' – a competitive stake in the rapidly evolving global AI landscape. Zuckerberg concluded his note by hinting at more talent announcements in the coming weeks, describing the effort as 'a new influx of talent and a parallel approach to model development.'


Business Recorder
Siemens recruits artificial intelligence expert from Amazon
ZURICH: Siemens has recruited Amazon executive Vasi Philomin to its new position of head of data and artificial intelligence, the German technology company said on Monday. The move is the latest step by Siemens as it seeks to develop AI products and applications like its Industrial Copilot.

Siemens has been aiming to accelerate its transition to a technology-focussed company, with AI seen as a key area along with industrial software. In 2023, Siemens unveiled a partnership with Microsoft to use artificial intelligence to increase productivity and human-machine collaboration in the manufacturing, transportation and healthcare industries. The project will create AI copilots to assist staff at customer companies as they design new products and organize production and maintenance.

Siemens said it was delighted to welcome Philomin, who had extensive experience in machine learning and industrial-scale AI applications at Amazon. Philomin will report to Siemens' managing board member Peter Koerte, who is the company's chief technology officer and chief strategy officer. 'With his outstanding AI expertise and proven leadership in developing transformative technologies, he will make a decisive contribution to further expanding our data and AI capabilities,' said Koerte.