Qantas confirms over a million customers' personal information leaked
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Forbes
30 minutes ago
- Forbes
Evolving AI Security Into Enterprise Risk Strategy
Metin Kortak is the Chief Information Security Officer at Rhymetec, an industry-leading cybersecurity firm for SaaS companies. AI is exciting, but it's being deployed across enterprises at a pace beyond the effective governance capability of most organizations. While regulatory frameworks such as the EU AI Act represent meaningful progress, they fall short of addressing the full scope of risks, particularly in areas like cybersecurity, misinformation and internal misuse. At the same time, global security standards remain fragmented or nonexistent, leaving organizations without a consistent road map for implementing responsible AI. For CISOs and technology leaders, this situation demands urgent action. Relying on future regulation to catch up means accepting exposure to operational, reputational and legal risks. The path forward requires proactive, anticipatory leadership: securing AI systems, embedding oversight and establishing governance structures now, not waiting for compliance mandates to dictate the response. Why AI Security Currently Falls Short AI is now embedded across enterprise functions—from fraud detection to HR automation. Yet universal security standards do not exist. Developers continue to release large-scale models, often open-source or API-accessible, without consistent safeguards governing their behavior, inputs or decision pathways. As reliance on AI grows, so do the attack surfaces. Adversaries are already using AI to bypass intrusion detection systems, dynamically adjust tactics and exploit algorithmic blind spots. These threats are evolving faster than conventional cybersecurity defenses can adapt, creating an "arms race" in cybersecurity. In many cases, attackers are using the same tools as defenders—only more aggressively and creatively. Despite the risks, most companies have yet to establish formal governance around AI systems. These tools are deployed as part of the digital stack but are often excluded from the rigorous security assessments applied to other infrastructure. The result: Organizations are expected to innovate with AI, but they do so without the guidance of coherent or enforceable standards. What The EU AI Act Still Lacks The European Union's Artificial Intelligence Act (EU AI Act) is the most comprehensive regulatory initiative for generative AI to date. It introduces a tiered risk framework, bans certain harmful AI applications and imposes compliance obligations for high-risk systems. Use cases in hiring, health care or critical infrastructure require transparent data governance, human oversight and explainability. Penalties are steep—up to 7% of global revenue or 35 million euros for noncompliance. Still, important areas remain uncovered. General-purpose models, such as LLMs, often escape scrutiny unless they are deployed in regulated contexts. And generative misinformation or synthetic content remains blanketed under overly broad categories. Deepfakes, in particular, elude straightforward classification and regulation, despite their growing threat profile. The EU AI Act may serve as a global benchmark—a Brussels Effect in motion—but enforcement across borders will be difficult, and the speed of AI innovation is unlikely to slow. The Cost Of Waiting For Regulations To Kick In The AI threat environment is evolving in real time. Generative tools are being used to create synthetic identities, launch phishing campaigns and impersonate executives with increasing ease. Organizations lacking clear AI controls are vulnerable to a wide range of risks, including external attacks, internal misuse and system misalignment. Many continue to deploy these tools without fully considering their downstream impact. Security's role is to reduce risk, not eliminate it. That principle is especially critical in the context of unpredictable AI system behavior. AI must shift from risk minimization theory to real-time implementation, particularly as generative threats continue to evolve. Waiting for regulation is not a viable strategy. The risks are immediate and the tools are already in the hands of both innovators and adversaries. AI models, if left unchecked, can reinforce bias, spread misinformation and cause reputational harm. The cost of inaction is not just financial—it's operational, reputational and legal. Organizations that start aligning technical safeguards with business strategy today will be better positioned to meet the moment—and whatever comes next. Six Steps Businesses Can Take Now AI security must evolve from a compliance exercise into a core pillar of enterprise risk management. For CISOs and technology leaders, this means anticipating not just today's threats but tomorrow's regulations and adversarial tactics. Start by mapping every AI system in use by the organization—internally developed or third-party integrated. This includes tools built into SaaS products, chatbots and decision engines. Capture their functions, input sources, training data, business impact and any reliance on external APIs or models. Perimeter trust is dead. The same applies to AI. Validate all inputs and continuously monitor outputs. Access to models should be restricted based on functional roles and integrated with identity and access management policies. Techniques like differential privacy, federated learning and confidential computing allow protection of sensitive data even during training and inference. These strategies help maintain control without sacrificing insight. Every employee interacting with AI must understand its boundaries. That includes knowing what data can be shared, how to evaluate outputs and where to report anomalies. Define usage policies that govern prompt safety, data sharing and vendor evaluation. Include AI in red team exercises. Explore how systems respond to crafted prompts, poisoned data or prompt injections. This surfaces vulnerabilities before bad actors do. No AI system should operate without meaningful human-in-the-loop supervision. These systems are inherently fallible—subject to errors, blind spots and unpredictable behaviors. Review layers, escalation paths and audit capabilities must be embedded from the start. Securing AI is not just a technical task; it's a strategic one. Businesses that move beyond the checklist approach and design for resilience, ethics and transparency will be best equipped to lead. Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?


Forbes
30 minutes ago
- Forbes
Driving Success Through Resilience: How Overcoming Challenges Leads To Exceptional Results
Rizwan Jan is VP and Chief Information Officer of The CNA Corporation. getty I'll never forget the time I lost 10 years of data. Beyond the pure technical failure, it forced me to confront something deeper: how disconnected I had become from the core purpose of the business I was supporting. In IT and cybersecurity, it's easy to get wrapped up in the tools and tech solutions. We focus so much on the 'what' and the 'how' that we sometimes forget the reasons behind it all. That moment when I had to face the board and explain what happened to all that data made me stop and think: Why are we doing what we're doing? That experience inspired me to develop a better backup strategy, of course. But it also realigned my focus with the business itself: goals, people and mission. I started listening more. I started building better relationships. And I started developing strategies that weren't just technically sound, but actually valuable to the business. Success is a team effort, and you're only as good as the people you surround yourself with. This starts with setting aside ego and insecurity when it comes to hiring. You have to be willing to bring on people who are smarter than you. As they say, if you're always the smartest person in the room, you're in the wrong room. Even more than intelligence, though, I look for heart. You can always teach skills, but you can't teach integrity, empathy or drive. If you build a team of people who only serve their own interests, who don't care about the bigger picture, it can sink you. When someone truly cares about the mission, their passion and excellence elevate the team's collective potential. Embrace Change Like It's Your Job In tech and cyber especially, we're constantly chasing the next frontier. Today it's AI; tomorrow it will be quantum. After that, who knows? That's why embracing change is essential. Think of it as a launchpad, an opportunity to grow. You become a sponge, soaking up information from the environment and the people around you, the processes and technologies. And learning is just the start. You have to take what you've learned and turn it into action. Maybe it means building a new solution or partnering with a firm that solves a problem better than you can internally. Maybe it's just documenting a smarter process. But either way, if you stay still, you'll inevitably fall behind. Lead With Positivity And Purpose Let's be honest: Not every day is great. But when I walk into the office, I bring my game face. Because as much as anything, leadership is about how you show up. A positive attitude changes the energy in the room. When people feel safe, supported and motivated, they're more likely to speak up, share ideas and challenge the status quo. Without that, people shut down. Learning stalls and complacency creeps in. Leadership is about creating an environment where people feel they can stretch beyond their comfort zones. If they believe in the mission and know they're valued, they naturally want to grow and contribute. That means resilience needs to be embedded into your organization's culture. Celebrate All The Wins The relentless pace in tech means there's always a new mountain to climb. But if you don't stop and celebrate the wins, that's a recipe for burnout. People want to feel that their work matters. The long hours, problem-solving and dedication all need to be appreciated. Celebrating wins builds pride and loyalty. General praise is fine, but specificity can make someone feel deeply seen; not just 'Great brief,' but 'Here's what I learned from it.' When people feel valued, they give more. Sometimes, Just Listen There's a misconception in leadership that you always need to have the answers. In fact, some of the most important moments I've had as a leader came when I said nothing at all. People don't always need solutions. They need support and the feeling of being heard. Listening creates space for real connection, opening the door for fresh ideas and deeper trust. And trust is the foundation of any high-performing team. Resilience is about who you surround yourself with, your willingness to embrace change and uncertainty, and the positive energy you bring into a room. It's a choice to celebrate progress and listen closely, even when everything else feels like it needs to happen right now . Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?


TechCrunch
34 minutes ago
- TechCrunch
Google announces latest AI American Infrastructure Acadmey cohort
Google on Thursday announced the second cohort to take part in its AI Academy American Infrastructure Academy, which seeks to support companies using AI to address issues such as cybersecurity, education, and transportation. The four-month program is designed for companies at a seed to Series A stage and provides equity-free support and resources like leadership coaching and sales training. It's primarily virtual, but founders will convene for an in-person summit eventually at Google. Applications opened in late April of this year and closed mid-May; companies selected had to pass a competitive criteria, including having at least six months of runway and having proof of traction. Google has a pretty good track record so far of identifying notable AI startups. Alumni from Google's American Infrastructure first cohort last year include the government contractor company Cloverleaf AI, which went on to raise a $2.8 million seed round, and Zordi, an autonomous agtech that had already raised $20 million from Khlosa Ventures. And it partners with some of the most significant AI companies that use its cloud. Here were the companies selected for this latest batch: This is just one of a number of programs where Google invests in AI startups and research. TechCrunch reported a few months ago that it launched its inaugural AI Futures Fund initiative to back startups building with the latest AI tools from DeepMind. Last year, Google's charitable wing announced a $20 million commitment to researchers and scientists in AI and an AI accelerator program to give $20 million to nonprofits developing AI technology. Sundar Pichai also said the company would create a $120 million Global AI Opportunity Fund to help make AI education more accessible to people throughout the world. Techcrunch event Save up to $475 on your TechCrunch All Stage pass Build smarter. Scale faster. Connect deeper. Join visionaries from Precursor Ventures, NEA, Index Ventures, Underscore VC, and beyond for a day packed with strategies, workshops, and meaningful connections. Save $450 on your TechCrunch All Stage pass Build smarter. Scale faster. Connect deeper. Join visionaries from Precursor Ventures, NEA, Index Ventures, Underscore VC, and beyond for a day packed with strategies, workshops, and meaningful connections. Boston, MA | REGISTER NOW Aside from this, Google has a few notable other Academies seeking to help founders, including its Founders Academy and Growth Academy. A Google spokesperson told us earlier this year that its Google for Startups Founders Fund would also look to start backing AI-focused startups as of this year.