
‘Karachi Slush'D 2025': ‘AI should be embedded in everyday operations'
The event, 'Karachi Slush'D 2025' was organized by Katalyst Labs at the auditorium of NASTP on Saturday. Jahan Ara, CEO of Katalyst Labs, said the organization is a Startup Accelerator and Innovation Hub committed to helping startups scale, developing future leaders, and enabling corporations to advance their innovation strategies.
During a panel discussion titled The Next Frontier of AI, Daniyal Baig said that AI plays a vital role in improving products. He emphasized that AI is not the future it is the present and noted that the US has already integrated AI into its school systems. 'We have to shift the mindset that AI will lead to job losses,' he said. He added that businesses are now using 10 to 12 AI tools to market their products.
Ahsan Mashkoor said, 'Just like you're fond of food, you should embed AI into your life.' He said that Pakistan holds a significant advantage and should capitalize on AI, calling it a potential game-changer. He also stressed the need to empower youth with AI tools.
Jaya Rajwani, who moderated the panel, said the country's future lies in AI. This was followed by another panel discussion, Building a Brand that Stands Out, featuring graphic designers and moderated by Hira Fareed.
Arslan Khatri said that understanding audiences is a key to building a successful brand.
Kiran Ahmed underscored the importance of research and conceptualization in brand development.
Fatin Nawaz remarked that identifying the target audience is crucial for building strong brands.
Adnan Syed said brands exist in alignment with audience needs and should resonate with people's emotions. 'Brands are created deep inside the heart,' he added.
Maira Siddiqui, CEO of Chiragh Education Technologies, delivered a talk on her journey in promoting education in native languages.
Karachi Slush'D was a one-day event aimed at empowering the startup and entrepreneurial ecosystem. The conference brought together a vibrant community of founders, students, professionals, investors, and other key players, fostering collaboration and creativity to shape the future of innovation and economic growth in Pakistan.
Copyright Business Recorder, 2025
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Business Recorder
a day ago
- Business Recorder
GeoGemma win ‘Best AI Use Case' award at 2025 APAC Solution Challenge
Pakistan's student innovation took centre stage at the 2025 APAC Digital Transformation Forum as 'GeoGemma', a student team from the Institute of Space Technology (IST) in Islamabad, was awarded with the 'Best AI Use Case' Award at the 2025 APAC Solution Challenge organised by Google Developer Groups (GDG) and the Asian Development Bank (ADB). The competition brought together student-led projects from across Asia-Pacific (APAC), each of which employed Google AI tools to address critical global challenges. The award celebrates the project that effectively leveraged AI technology to develop a practical solution to pressing issues that affect our communities. GeoGemma, comprising students Ahmed Iqbal and Muhammad Abdullah in their final and second years, respectively, earned this accolade for its project that integrates satellite imagery with generative AI to address pressing environmental and geospatial issues. The jury was impressed by GeoGemma for its use of AI in multiple modalities, noting its strong technology stack and the important problem being solved. The group's use of the Gemini API is not just a feature but the core of its innovative solution to a complex and critical global problem. The project's ambition to democratize access to geospatial data through a sophisticated LLM-driven framework represents the most advanced and impactful application of Gemini among the submissions. Alongside GeoGemma, another Pakistani group, (N + 1)-th Time (Fast National University (NUCES), Islamabad Campus) was also one of the top 10 finalists of the Challenge. Comprising final year students Muhammad Huzaifa Khan and Hashim Muhammad Nadeem, the team developed a solution that helps neurodivergent users create documents more easily by providing a document editor that allows users to dictate, edit, and transform text naturally. 'We are incredibly proud to see the remarkable talent from Pakistan shine at the APAC Solution Challenge,' said Farhan Qureshi, Country Director, Google Pakistan. 'The young minds of GeoGemma and (N + 1)-th Time have demonstrated exceptional innovation and dedication, tackling some of the most critical challenges facing our world with their solutions using Gemini. GeoGemma's win of the 'Best AI Use Case' is also a testament to their impactful work, and a showcase of the thriving Pakistan developer ecosystem.' 'The APAC Solution Challenge was a cornerstone of the Asia Pacific Digital Transformation Forum 2025. We saw the power of student-led innovation to address real-world challenges in healthcare, sustainability, trade, and tourism through technology and AI. The energy and creativity of the students were truly inspiring; we are seeing the future of sustainable digital transformation in the Asia Pacific region. Other award recipients at the forum included Atempo from Konkuk University, South Korea, who won the Most Societal Impact Award for their AI-powered emergency room matching platform, and the People's Choice Award, presented to Team portfolio making group 2 from Holy Angel University, Philippines, for their waste management tracking solution.


Express Tribune
2 days ago
- Express Tribune
Top AI models show alarming traits, including deceit and threats
A visitor looks at AI strategy board displayed on a stand during the ninth edition of the AI summit in London. PHOTO: AFP Listen to article In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer and threatened to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed. These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed. This deceptive behavior appears linked to the emergence of "reasoning" models -AI systems that work through problems step-by-step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts. "O1 was the first large model where we saw this kind of behavior," explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems. These models sometimes simulate "alignment" -- appearing to follow instructions while secretly pursuing different objectives. The world's most advanced AI models are exhibiting troubling new behaviors - lying, scheming, and even threatening their creators to achieve their goals The world's most advanced AI models are exhibiting troubling new behaviors - lying, scheming, and even threatening their creators to achieve their goals Photo: HENRY NICHOLLS For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception." The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception." The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS). Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules. Goldstein believes the issue will become more prominent as AI agents - autonomous tools capable of performing complex human tasks - become widespread. "I don't think there's much awareness yet," he said. All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections. "Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around.". Researchers are exploring various approaches to address these challenges. Some advocate for "interpretability" - an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach. Market forces may also provide some pressure for solutions As Mazeika pointed out, AI's deceptive behavior "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it." Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes - a concept that would fundamentally change how we think about AI accountability.


Express Tribune
4 days ago
- Express Tribune
Meta wins copyright lawsuit
A US judge on Wednesday handed Meta a victory over authors who accused the tech giant of violating copyright law by training Llama artificial intelligence on their creations without permission. District Court Judge Vince Chhabria in San Francisco ruled that Meta's use of the works to train its AI model was "transformative" enough to constitute "fair use" under copyright law, in the second such courtroom triumph for AI firms this week. However, it came with a caveat that the authors could have pitched a winning argument that by training powerful generative AI with copyrighted works, tech firms are creating a tool that could let a sea of users compete with them in the literary marketplace. "No matter how transformative (generative AI) training may be, it's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books," Chhabria said in his ruling. Tremendous amounts of data are needed to train large language models powering generative AI. Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment. AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally transforms the original content and is necessary for innovation. "We appreciate today's decision from the court," a Meta spokesperson said in response to an AFP inquiry. "Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology." In the case before Chhabria, a group of authors sued Meta for downloading pirated copies of their works and using them to train the open-source Llama generative AI, according to court documents. Books involved in the suit include Sarah Silverman's comic memoir The Bedwetter and Junot Diaz's Pulitzer Prizewinning novel The Brief Wondrous Life of Oscar Wao, the documents showed. "This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful," the judge stated. "It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one." Market harming? A different federal judge in San Francisco on Monday sided with AI firm Anthropic regarding training its models on copyrighted books without authors' permission. District Court Judge William Alsup ruled that the company's training of its Claude AI models with books bought or pirated was allowed under the "fair use" doctrine in the US Copyright Act. "Use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use," Alsup wrote in his decision. "The technology at issue was among the most transformative many of us will see in our lifetimes," Alsup added in his decision, comparing AI training to how humans learn by reading books. The ruling stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train chatbot Claude, the company's ChatGPT rival. Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use protections.