Latest news with #GenerativeArtificialIntelligence


Techday NZ
02-07-2025
- Techday NZ
Cybercriminals use GenAI, v0.dev to launch advanced phishing
Research from Okta Threat Intelligence has found that cybercriminals are leveraging Generative Artificial Intelligence (GenAI), specifically Vercel's v0.dev tool, to manufacture sophisticated phishing websites swiftly and at scale. Okta's researchers have observed threat actors utilising the platform to create convincing replicas of sign-in pages for a range of prominent brands. According to the team's findings, attackers can build a functional phishing site by inputting a short text prompt, thereby substantially reducing the technical barrier for launching attacks.

New methods

The research revealed that v0, which is intended to help developers create web interfaces through natural language instructions, is also allowing adversaries to quickly reproduce the design and branding of authentic login sites. In one case, Okta noted that the login page of one of its own customers had been imitated using the AI-powered software. Phishing sites created with v0 often also hosted visual assets such as company logos on Vercel's own infrastructure. Okta Threat Intelligence explained that consolidating these resources on a trusted platform is a deliberate technique: attackers aim to avoid typical detection methods that monitor for assets served from known malicious or unrelated infrastructure. Vercel responded to these findings by restricting access to the suspect sites and working with Okta to improve reporting processes for additional phishing-related infrastructure. The observed activity confirms that today's threat actors are actively experimenting with and weaponising leading GenAI tools to streamline and enhance their phishing capabilities. A platform like Vercel's allows emerging threat actors to rapidly produce high-quality, deceptive phishing pages, increasing the speed and scale of their operations.

Wider proliferation

The report also noted the existence of several public GitHub repositories that replicate the v0 application, along with DIY guides enabling others to build their own generative phishing tools. According to Okta, this widespread availability is making advanced phishing tactics accessible to a broader cohort of cybercriminals, effectively democratising the creation of fraudulent web infrastructure. Further monitoring revealed that attackers have used the Vercel platform to host phishing sites imitating not just Okta customers but also brands such as Microsoft 365 and various cryptocurrency companies. Security advisories related to these findings have been made available to Okta's customers.

Implications for security

Okta Threat Intelligence underlined that this represents a significant change in the phishing threat landscape, given the increasingly realistic appearance of sites generated by artificial intelligence. The group stressed that relying on traditional indicators of poor quality or imperfect design is no longer a sufficient safeguard: organisations can no longer expect users to identify phishing sites by spotting imperfect imitations of legitimate services. The only reliable defence is to cryptographically bind a user's authenticator to the legitimate site it was enrolled with. This is the technique that powers Okta FastPass, the passwordless method built into Okta Verify. When phishing resistance is enforced in policy, the authenticator will not allow the user to sign in to any resource other than the origin (domain) established during enrolment.
Put simply, the user cannot be tricked into handing over their credentials to a phishing site. To address these risks, Okta Threat Intelligence has recommended several mitigation strategies. These include enforcing phishing-resistant authentication policies and prioritising the deactivation of less secure factors, restricting access to trusted devices, requiring secondary authentication if anomalous user behaviour is detected, and updating security awareness training to account for AI-driven threats. The research reflects the rapid operationalisation of machine learning tools in malicious campaigns, and highlights the need for continuous adaptation by organisations and their cybersecurity teams in response to evolving threats.
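The origin binding described above is the same property provided by open standards such as FIDO2/WebAuthn. The sketch below is a generic, minimal illustration of the idea, not Okta FastPass's actual implementation, and the origin and relying-party ID values are hypothetical. A relying-party server rejects any sign-in assertion whose browser-reported origin, or whose relying-party-ID hash, does not match the site the credential was enrolled with:

```python
import hashlib
import json

# Hypothetical relying-party values, for illustration only.
EXPECTED_ORIGIN = "https://login.example.com"
EXPECTED_RP_ID = "login.example.com"


def assertion_origin_is_valid(client_data_json: bytes, authenticator_data: bytes) -> bool:
    """Check the two origin bindings a WebAuthn server verifies before
    accepting a sign-in assertion (signature verification would follow)."""
    client_data = json.loads(client_data_json)

    # The browser, not the page content, fills in these fields, so a
    # look-alike phishing domain cannot claim the legitimate origin.
    if client_data.get("type") != "webauthn.get":
        return False
    if client_data.get("origin") != EXPECTED_ORIGIN:
        return False

    # The authenticator scopes each credential to the relying-party ID it was
    # enrolled with; the first 32 bytes of authenticatorData are SHA-256(rpId).
    expected_rp_hash = hashlib.sha256(EXPECTED_RP_ID.encode()).digest()
    if authenticator_data[:32] != expected_rp_hash:
        return False

    return True
```

Because the browser supplies the origin and the authenticator is scoped to the enrolled domain, even a pixel-perfect clone hosted elsewhere fails these checks, which is the sense in which the user cannot be tricked into handing over credentials.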


Borneo Post
11-06-2025
- Business
- Borneo Post
Can you beat the machine in your next job application?
Using AI tools in hiring has both benefits and challenges: on one hand, it can help reduce human bias, but there are also growing concerns about fairness and transparency. — Bernama photo

IN today's fast-changing job market, landing your dream job may no longer depend solely on impressing a human recruiter. Increasingly, the 'first person' reviewing your application might be a machine. Artificial intelligence (AI) is transforming how companies hire new staff, from sorting résumés to scoring interviews. Job-seekers must learn how to stand out in this new digital era. But how does it work?

Use of AI tools in hiring

AI tools have become popular and relatively cost-effective to use, thanks to Generative Artificial Intelligence (Gen AI) and Large Language Models (LLMs). There are a multitude of AI tools for various management functions, including the all-important recruitment and selection functions of human resource (HR) management. Many companies, from technology giants to medium-sized enterprises, are in one way or another using AI tools to make recruitment faster, cheaper, more efficient, and more objective. These tools help HR teams handle thousands of applications, using algorithms to screen résumés and analyse pre-recorded video interviews to assess applicants' skills and personality traits. Some AI tools, such as Applicant Tracking Systems (ATS), can scan résumés and filter out applicants who do not match the job specification (JS). Others can record candidates' responses and analyse facial expressions, voice tone, and word choices. For example, some multinational firms are already using software like 'HireVue' or 'Pymetrics' to evaluate job applicants. These platforms claim to offer unbiased assessments. However, for interviewees it can be a daunting endeavour, as they are, in effect, trying to impress a robot without knowing the rules of the game.

Winning the systems

So, how can job-seekers beat the machine and move their application forward? The first step is to understand how AI tools screen résumés. Many ATS search for keywords in the résumé that match the job description (a toy sketch of this kind of keyword screening appears at the end of this article). If your résumé does not contain the right words, or is written in a way the AI tools cannot read, you may be rejected instantaneously. This means it is essential to use keywords from the job advertisement, use simple formatting (no tables, columns, or graphics), and customise your résumé for each application. Next, for AI-powered video interviews, just as with human face-to-face interviews, preparation is of utmost importance. These systems may rate you on confidence, eye contact, clarity, and even enthusiasm. Yes, you will be surprised! Some tips for interviewees: practise speaking in front of a camera and review the recordings, and rehearse in front of friends who can give you honest feedback for improvement. Remember to stay calm and keep your answers clear and concise so that the AI tools can pick up the keywords easily. Show natural body language and smile, as these highly sophisticated AI tools are trained to interpret your emotions with some degree of accuracy. Therefore, be your best self, but stay authentic. After all, AI tools are predominantly just the first screening process, the gatekeeper if you like. The basic principles of showing interest, confidence, clarity and passion for the job you are interviewing for are still essential, even to machines.

Challenges of AI in hiring

Having sung all the praises of AI tools, the reality is they are not perfect.
Using AI tools in hiring has both benefits and challenges. On the positive side, they can help reduce human bias. Theoretically, an AI system is not concerned about your name, gender, social status, age or where you graduated from. This could help level the playing field, especially for candidates from less well-known or disadvantaged backgrounds. However, there are growing concerns about fairness and transparency. Algorithms can reflect the biases of their creators or unintentionally favour certain language patterns or personality types. A system may favour certain speaking styles, or penalise people with different accents or expressions. Some job-seekers are concerned that they may be rejected not because they lack the required knowledge, skills and abilities (KSA), but because the AI tools do not 'understand' them well. For job-seekers in non-English-speaking countries like Malaysia, this can be even more challenging. Although some powerful AI tools have recently emerged from China, most were developed in Western countries using Western contexts. Hence, many AI tools may misinterpret accents, gestures, or even grammar in an Asian context and culture. In view of this, there is a need for more ethical and inclusive AI systems in hiring, especially at multinational companies. Despite these challenges, the use of AI tools in the recruitment and selection process is here to stay. Job-seekers must adapt by learning how AI tools work and preparing in earnest. The tips showcased in this article can help turn the machine from an obstacle into an advantage.

Human intelligence matters

In a world where machines read your résumé and judge your video interviews, the best way to beat the machine is to stay one step ahead. Even though machines are part of the hiring process, most final decisions are still made by humans like you and me. Therefore, once you get past the initial AI screening, your ability to connect with real people will take centre stage. Bring your best self to the interviews, share your story, show your passion and speak with conviction from the depth of your heart. Never forget that you are a human being who possesses unique traits no machine can ever beat: emotional intelligence that allows you to understand and respond to feelings, and the creativity and critical thinking that drive innovation and problem-solving in ways machines cannot replicate. Capitalise on these precious talents, give your best, and lead the way forward!

* The opinions expressed in this article are the author's own and do not necessarily reflect the view of Swinburne University of Technology Sarawak Campus. Prof Fung is the head of the School of Business at Swinburne University of Technology Sarawak Campus, while Prof Chung is an Associate Professor in Human Resources Management in the same School.
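To make the keyword screening described in this article concrete, here is a toy sketch. It is purely illustrative: it is not how HireVue, Pymetrics or any specific ATS actually scores candidates, and the keywords and résumé snippets are invented. It simply shows how a naive keyword filter can rank two candidates with broadly similar experience very differently:

```python
import re


def keyword_score(resume_text: str, job_ad_keywords: list[str]) -> float:
    """Toy keyword screen: the share of the advert's keywords that appear
    verbatim (as whole words or phrases) in the résumé text."""
    text = resume_text.lower()
    hits = sum(
        1
        for kw in job_ad_keywords
        if re.search(r"\b" + re.escape(kw.lower()) + r"\b", text)
    )
    return hits / len(job_ad_keywords) if job_ad_keywords else 0.0


# Invented example: the candidate who mirrors the advert's wording scores far higher.
keywords = ["stakeholder management", "sql", "forecasting", "power bi"]
resume_a = "Led forecasting and stakeholder management; built SQL and Power BI reports."
resume_b = "Managed partners and produced business dashboards and projections."

print(keyword_score(resume_a, keywords))  # 1.0 -> likely passes the filter
print(keyword_score(resume_b, keywords))  # 0.0 -> likely filtered out despite similar experience
```

This is why mirroring the wording of the job advertisement, in a simple format the parser can actually read, matters so much at the first screening stage, even when the underlying experience is comparable.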


Express Tribune
27-05-2025
- Science
- Express Tribune
GenAI in education: between promise and precaution
The writer is a Professor of Physics at the University of Karachi

Generative Artificial Intelligence (GenAI) is rapidly reshaping the landscape of education and research, demanding a thoughtful and urgent response from educators and policymakers. As a faculty member and a member of the Advanced Studies and Research Board at a public sector university, I have witnessed both the excitement and the uncertainty that AI tools like ChatGPT have generated within academic circles. While the potential of GenAI to enhance learning and scholarly productivity is undeniable, its unregulated and unchecked use poses significant risks to the core principles of academic integrity, critical thinking and equitable access to knowledge.

As Pakistan embraces digital transformation and positions itself within the global digital economy, AI literacy has emerged as a foundational competency. In an earlier op-ed published in these columns on October 5, 2024, entitled 'AI Education Revolution', I emphasised that AI literacy is not just a technical skill, but a multidisciplinary competence involving ethical awareness, critical thinking and responsible engagement. That argument is now even more relevant. With tools like ChatGPT and DALL·E becoming commonplace, students must be equipped not only to use them effectively but to understand their societal and epistemological implications.

GenAI offers immense opportunities. It enables personalised learning, streamlines research, provides real-time feedback and enhances access to complex knowledge. For students in under-resourced areas, it can bridge educational gaps. For researchers, it reduces the cognitive burden of information overload. But with these capabilities comes the risk of over-reliance. The seamless generation of essays, analyses and even ideas without meaningful engagement undermines the very purpose of education: cultivating independent thought and inquiry.

One of the most pressing issues is the shift in how students perceive learning. Many now use AI tools as shortcuts, often without malintent, bypassing critical processes of reasoning and originality. This trend not only threatens academic rigour but fosters a culture of passive dependence, something that was forewarned in the context of AI misuse and unintentional plagiarism in academic settings. As discussed in the earlier op-ed, the absence of AI literacy can blur the lines between learning and copying, between thinking and prompting.

To address these risks, UNESCO's recent guidance on AI in education offers a valuable framework. Governments must legislate clear, enforceable policies around age-appropriate use, data protection and algorithmic transparency. Educational institutions must rigorously assess the pedagogical validity and ethical dimensions of AI tools before integrating them. But perhaps the most crucial intervention lies in embedding AI literacy directly into curricula across disciplines, as a horizontal skill akin to critical thinking or digital citizenship. Hands-on engagement with GenAI is essential. Students must not only generate content but also critically evaluate it for bias, coherence and accuracy. To support this, assessments should evolve, emphasising oral presentations, collaborative projects and reflective analysis to promote authentic learning. Educators, too, must adapt through targeted training that enables them to guide students responsibly.
Institutions should support this shift with updated pedagogical strategies and professional development programmes that integrate AI while preserving academic integrity. Given AI's borderless nature, international cooperation is vital. UNESCO must continue leading efforts to establish shared ethical frameworks and best practices. Pakistan should actively engage in this global dialogue while strengthening local capacity through curriculum reform, infrastructure investment and academic-policy collaboration to ensure GenAI serves as a responsible and equitable tool for learning. GenAI is not a passing phase; it is a structural shift. Whether it becomes a tool for democratising knowledge or a force that erodes educational values depends on how we act today. The future of education will not be determined by machines alone, but by the wisdom with which we choose to engage with them.


Entrepreneur
14-05-2025
- Business
- Entrepreneur
Are Indian Employees Becoming Overly Dependent on AI?
"Overdependence on AI becomes a shortcut for thinking, and that's not how companies can be built," Sanjay Varnwal, CEO and Co-founder, Spyne Opinions expressed by Entrepreneur contributors are their own. You're reading Entrepreneur India, an international franchise of Entrepreneur Media. When we talk about Generative Artificial Intelligence (GenAI), the usual image that comes to mind is of a smart assistant that boosts productivity and simplifies work. So, it's no surprise that employees are increasingly using AI tools at the workplace. What's surprising, however, is that a majority feel they can't work without them. According to KPMG's Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025, 67 per cent of Indian respondents said they couldn't complete their work without AI, and 71 per cent admitted to using AI tools rather than learning how to do tasks themselves. This rising dependence is prompting an important question—is India relying on AI at the cost of critical thinking and accountability? AI is not the ultimate truth Sanjay Varnwal, CEO and Co-founder, Spyne, acknowledges AI's contribution to productivity but draws a clear line between assistance and overreliance. "We encourage our teams to embrace AI, it helps them move faster, automate the repetitive, and scale what was previously unscalable. AI is like a research assistant laying the foundation for real work, be it writing code or building a strategy. But human judgement is non-negotiable. We teach our teams where to draw the line." That balance, Varnwal adds, is what ensures AI remains a tool, not a crutch. The report also notes that 73 per cent of employees admitted to making mistakes due to AI, and 72 per cent acknowledged misusing it in ways that violated policy. For Varnwal, this points to a deeper issue, not with the technology itself, but how it's being used. "The real risk with AI in the workplace isn't the technology, it's complacency," he says. "We hold employees accountable for results. We don't just offer tools; we expect them to use them ethically and strategically." He believes it is this culture of "extreme ownership" that has helped Spyne scale 5x in 15 months, despite being an AI-first company. The future isn't AI vs humans—it's humans with better tools Deepak Ravindran, Founder & CEO, KiranaPro, echoes the importance of balance. While AI is central to their retail operations including voice-based ordering and personalised recommendations but human intelligence still leads the way. "We see AI as a means to amplify human potential, not replace it," says Ravindran. "Every team member is trained to understand and use AI thoughtfully. AI assists, but people lead." To address misuse and errors,"Our internal AI is auditable and explainable. Mistakes are treated as learning opportunities, but misuse is dealt with firmly. Accountability in AI is about conscious design, not just compliance," he adds. The speed of AI adoption in India has outpaced regulation. According to the same study, only 41 per cent of employees are aware of existing AI policies. Ravindran cautions that while India's AI momentum is commendable, the lack of policy awareness could backfire. "We need a middle path where innovation isn't stifled, but is guided by ethical frameworks and inclusive policies," he says. 
Vara Kumar Namburu, Co-founder & Head of R&D and Solutions at Whatfix, feels: "As AI emerges as a key driver of innovation in India, its true strength lies in its ability to simplify complexity and power smarter, more efficient workflows. This is where the human element becomes crucial. Without clarity or confidence, we struggle to adopt new technologies effectively." On the other hand, Varnwal sees the speed as a strength if handled responsibly. "Indian enterprises are absolutely moving faster than regulation and frankly, that's a good thing. But speed without a seatbelt is risky," he warns. "AI done right can be India's global advantage. AI done wrong will be our Achilles' heel." He concludes with a reminder that while AI can build efficiency, it cannot replace original thinking. "Overdependence on AI becomes a shortcut for thinking, and that's not how companies can be built. AI models are getting trained on what's already out there in the world, and we need human intelligence to create something original, ground-breaking, and valuable."


Yahoo
03-05-2025
- Business
- Yahoo
TCS expands partnership with SAP to drive cloud adoption
India's Tata Consultancy Services (TCS) has expanded its partnership with SAP to facilitate business transformation for SAP customers through the adoption of Generative Artificial Intelligence (GenAI). The move aims to enhance scalability, agility, and innovation across enterprises.

To expedite the adoption of enterprise-wide cloud technologies, TCS and SAP will jointly support customers through the 'RISE with SAP' initiative. This programme is designed to simplify the transition from traditional on-premises infrastructure to modern cloud environments. TCS's plan includes working closely with SAP to create a centralised ecosystem for global customers to improve service management and end-user experiences.

TCS Technology, Software and Services president V Rajanna said: 'Over the past two decades, TCS and SAP have consistently delivered industry-leading solutions, empowering global enterprises on their digital transformation journeys. As we embark on the next phase, we remain committed to creating sustainable value and fostering growth for our customers. Together, we will continue to transform end-user experiences and drive innovation across the enterprise landscape.'

To foster innovation, TCS is planning to set up an Innovation Council, which will utilise the Agile Innovation Cloud (AIC) framework. The council will focus on driving innovation in key areas such as AI democratisation, advancing GenAI, and enhancing automation ecosystems, with the goal of enabling large-scale innovation for SAP customers.

SAP Customer Services & Delivery executive board member Thomas Saueressig said: 'Our collaboration with TCS continues to drive meaningful impact for customers by bringing together leading cloud solutions and proven delivery expertise. Together, we are helping organisations simplify their transformation journeys, accelerate cloud adoption, and harness the power of AI and data.'

In addition, the company will leverage its TCS Pace Port innovation network, which spans 12 major cities globally, to foster collaboration and develop solutions with SAP customers. The TCS Pace network is designed to promote systematic, scalable, and sustainable innovation within enterprises.

TCS Enterprise Solutions global head Vikram Karakoti said: 'TCS looks forward to building on its 20-year partnership with SAP to launch an accelerated path to RISE with SAP adoption and E2E automation with GenAI. TCS enjoys a 360° relationship with SAP, and, together, we provide our clients with seamless and flexible digital cloud adoption, reinforcing operational resilience and efficiency. The new endeavour combines our agile, scalable methodologies with cutting-edge GenAI innovations to help global enterprises adapt, grow, and unlock new opportunities through technology.'

In February 2025, TCS teamed up with Salesforce to enhance AI solutions for the manufacturing and semiconductor industries.

"TCS expands partnership with SAP to drive cloud adoption" was originally created and published by Verdict, a GlobalData owned brand.