CyCraft Launches XecGuard: LLM Firewall for Trustworthy AI
CyCraft Co-Founders (from left to right): Benson Wu (CEO), Jeremy Chiu (CTO), and PK Tsung (CISO) are leading the mission to build the world's most advanced AI security platform.
Trustworthy AI Matters
The transformative power of Large Language Models (LLMs) brings significant security uncertainty, requiring enterprises to urgently safeguard their AI models from malicious attacks like prompt injection, prompt extraction, and jailbreak attempts. Historically, AI security has been an 'optional add-on' rather than a fundamental feature, leaving valuable AI and data exposed. This oversight can compromise sensitive data, undermine service stability, and erode customer trust. CyCraft emphasizes that 'AI security must be a standard feature—not an optional add-on,' believing it's paramount for delivering stable and trustworthy intelligent services.
The Imminent Need for Proactive AI Defense
The need for immediate and effective AI security is more critical than ever before. As AI becomes increasingly embedded in core business operations, the attack surface expands exponentially, making proactive defenses an absolute necessity. CyCraft has leveraged its extensive 'battle-tested expertise across critical domains—including government, finance, and high-tech manufacturing' to precisely address these emerging AI-specific threats. The development of XecGuard signifies a shift from 'using AI to tackle cybersecurity challenges' to now 'using AI to protect AI' , ensuring that security and resilience are embedded from day one.
'AI security must be a standard feature—not an optional add-on,' stated Benson Wu, CEO, highlighting XecGuard's resilience and integration of experience from defending critical sectors. Jeremy Chiu, CTO and Co-Founder, emphasized, 'In the past, we used AI to tackle cybersecurity challenges; now, we're using AI to protect AI,' adding that XecGuard enables enterprises to confidently adopt AI and deliver trustworthy services. PK Tsung, CISO, concluded, 'With XecGuard, we're empowering enterprises to embed security and resilience from day one' as part of their vision for the world's most advanced AI security platform.
CyCraft's Solution: XecGuard Empowers Secure AI Deployment
CyCraft leads with the global launch of XecGuard, the industry's first plug-and-play LoRA security module purpose-built to defend LLMs. XecGuard provides robust protection against prompt injection, prompt extraction, and jailbreak attacks, ensuring enterprise-grade resilience for AI models. Its seamless deployment allows instant integration with any LLM without architectural modification, delivering powerful autonomous defense out of the box. XecGuard is available as a SaaS, an OpenAI-compatible LLM firewall on your cloud (e.g., AWS or Cloudflare Workers AI), or an embedded firewall for on-premises, NVIDIA-powered custom LLM servers. Rigorously validated on major open-source models like Llama 3B, Qwen3 4B, Gemma3 4B, and DeepSeek 8B, it consistently improves security resilience while preserving core performance, enabling even small models to achieve protection comparable to large commercial-grade systems.
Even small models gain enterprise-level defenses, approaching large commercial-grade performance.
Real-world validation through collaboration with APMIC, an NVIDIA partner, integrated XecGuard into the F1 open-source model, demonstrating an average 17.3% improvement in overall security defense scores and up to 30.1% in specific attack scenarios via LLM Red Teaming exercises. With XecGuard and the Safety LLM service, CyCraft delivers enterprise-grade AI security, accelerating the adoption of resilient and trustworthy AI across industries, empowering organizations to deploy AI securely, protect sensitive data, and drive innovation with confidence.
To learn more about how XecGuard can protect your LLMs and to request a demo, visit: www.cycraft.com/en/xecguard
Hashtag: #CyCraft #LLMFirewall #AISecurity
https://www.cycraft.com/
https://www.linkedin.com/company/cycraft/
https://x.com/cycraft_corp
The issuer is solely responsible for the content of this announcement.
About CyCraft Technology
CyCraftis a leading AI-driven cybersecurity company in the Asia-Pacific region. Trusted by hundreds of organizations in defense, finance, and semiconductor industries, our AI is designed to prevent, preempt, and protect against cyber threats. Our expertise has been recognized by top-tier institutions like Gartner and IDC and showcased at prestigious global conferences, including Black Hat, DEFCON, EMNLP, and Code Blue.
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Forbes
an hour ago
- Forbes
Why Large Language Models Are The Future Of Cybersecurity
Karan Alang, principal software engineer at Versa Networks with 25 years of experience in AI, cloud and big data. Cybersecurity today faces a key challenge: It lacks context. Modern threats—advanced persistent threats (APTs), polymorphic malware, insider attacks—don't follow static patterns. They hide in plain sight across massive volumes of unstructured data: logs, alerts, threat feeds, user activity, even emails. Traditional defenses—whether signature-based detection, static rules or first-generation ML models—while effective against known threats, struggle with the scale and complexity of modern attack vectors. They often produce false positives, and their rule-based nature means novel or sophisticated attacks are typically detected only after damage has occurred. Large language models (LLMs) have the capability to change this. Originally built to understand and generate natural language, LLMs like GPT-4, Claude, Gemini and others offer something cybersecurity desperately needs: the ability to read between the lines. They can parse logs like narratives, correlate alerts like analysts and summarize incidents with human-level fluency. But LLMs are more than just smarter tools—they're the foundation of a new kind of AI-augmented defense system. The Six Most Promising Use Cases For LLMs In Cybersecurity LLMs can analyze behavioral baselines across users and devices, identifying subtle deviations that signal insider threats or credential abuse. Unlike rigid anomaly detection models, LLMs have the capability of identifying unknown threats and can reduce false positives significantly. By ingesting log data, incident reports and threat intel, LLMs can autonomously map behaviors to relevant MITRE ATT&CK techniques. This streamlines classification and enhances threat response workflows. LLMs excel at identifying unknown threats by recognizing semantic anomalies and behavioral inconsistencies across diverse data. This makes them well-suited for detecting zero-days, novel malware or multistage attack chains with no prior signature. Phishing remains the most common initial attack vector. LLMs can parse email language, structure and embedded content to detect social engineering cues, flagging threats that evade traditional filters. Security operations centers (SOCs) are drowning in alerts. LLMs can act as AI copilots, prioritizing the most relevant incidents, summarizing them in plain English and reducing analyst fatigue. LLMs can digest unstructured threat intelligence—white papers, PDFs, X feeds—and convert them into structured indicators of compromise (IOCs) or STIX/TAXII format for machine consumption. How To Ensure LLM Accuracy: Avoiding Hallucinations In cybersecurity, an incorrect AI-generated response isn't a bug—it's a liability. LLM hallucinations must be proactively mitigated. Here's how to do it right: • Retrieval-Augmented Generation (RAG): Pair the LLM with real-time data sources (logs, threat feeds, MITRE documentation). The model then generates answers based on verified content, not just memory. • Structured Prompting: Use defined templates that limit open-ended generation (e.g., {"mitre_technique": "T1566.001", "confidence": 0.93}). • Human-In-The-Loop Validation: Analysts should review and approve high-impact outputs (e.g., containment actions, incident classification). • Audit Logging: All AI-generated recommendations should be logged, including prompt, retrieved context and final output, for post-incident review and model tuning. • Fine-Tuning + Feedback Loops: Regularly incorporate analyst feedback to improve model accuracy and contextual alignment with your environment. LLMs should not replace your SOC—they should augment it with intelligence that's explainable, traceable and verifiable. Future Outlook: Agentic AI, MCP And Agent-To-Agent Architectures LLMs are the starting point. The next generation of AI in cybersecurity will be built on three converging frontiers: Agentic systems are LLM-powered entities that can reason, plan and take action with constraints. In security, they might: They won't replace analysts—but they'll act like Tier-1 analysts on autopilot, freeing humans for more strategic work. As enterprises deploy multiple AI models across detection, analysis and response, MCPs will standardize context transfer between models: This is essential for regulated environments that require compliance-ready automation. In early-stage prototypes already used in cyber defense research, multiple specialized AI agents communicate to divide tasks: This modular, collaborative AI ecosystem will redefine cybersecurity architecture—where AI agents act like a fully staffed, scalable SOC team. Granted, these architectures are in the nascent stage, but many companies are already applying these in next-gen cyber platforms and have the potential to become mainstream as protocols, standards and guardrails mature. Final Takeaway: What Security Leaders Should Do Now LLMs are no longer an experiment—they're a strategic imperative. Here's what CISOs, CIOs, CTOs and engineering leaders should consider: Conclusion We're entering an era where AI doesn't just help detect threats—it understands them, explains them and, soon enough, will act on them with human guidance. Large language models are not just the future of cybersecurity—they're the context engine that makes the rest of your security stack smarter. Now is the time to invest—not just in the technology but in the architecture and governance needed to make it secure, reliable and impactful. Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
Yahoo
15 hours ago
- Yahoo
Oracle (ORCL) Stock Hits an All-Time High Amid Increased Cloud Demand
Optimism surrounding Oracle's ORCL cloud and AI infrastructure endeavors pushed its stock to an all-time high of $228 a share on Monday. This comes as the enterprise cloud leader has secured lucrative contracts thanks to its momentum in AI, with many analysts starting to raise their price targets for Oracle stock. That said, let's see if now is a good time to buy ORCL for higher highs. Although the identity of the customer was not disclosed, Oracle stock spiked +4% on Monday following news of a regulatory filing in which the company had signed a massive $30 billion annual cloud services deal. Given the scale and nature of the contract, many analysts believe the client could be a major AI player like OpenAI. To that point, Oracle and OpenAI have formed a strategic partnership, collaborating on a massive AI training hub in Texas. As part of a broader $500 billion AI infrastructure initiative, which also includes Nvidia NVDA, Oracle is the venture's core infrastructure provider. Deemed Project Stargate, Oracle will be helping to expand OpenAI's compute capacity beyond Microsoft MSFT Azure. Embarking on a multi-cloud strategy, Microsoft and OpenAI are using Oracle Cloud Infrastructure (OCI) to integrate Azure's AI cloud platform to produce enhanced training for large-language models (LLMs). With demand booming for OCI, Oracle's MultiCloud revenue is reportedly growing at over 100% in correlation with the need to support generative AI workloads such as OpenAI's ChatGPT. Reporting its fiscal fourth-quarter results earlier in the month, Oracle's Q4 sales stretched 11% year over year to $15.9 billion, pinpointing that its MultiCloud database revenue spiked 115% sequentially, driven by partnerships with Amazon's AMZN AWS and Alphabet's GOOGL Google Cloud, along with Microsoft Azure. Following today's rally, ORCL is up +30% in 2025. More impressive, Oracle stock is now sitting on +200% gains in the last three years to vastly outperform the broader indexes and its Zacks Computer-Software Market's +95%, which includes Microsoft stock at +90%. Image Source: Zacks Investment Research Furthermore, despite Oracle's blazing stock performance, at 31.3X forward earnings, ORCL still trades beneath Microsoft's 37.1X and their Zacks Computer-Software Industry average of 35.9X. Image Source: Zacks Investment Research Citing strong cloud momentum, AI infrastructure growth, and a bullish outlook for Oracle's current fiscal year 2026, several financial firms have upped their price target for ORCL shares to $250, including analysts at Stifel, UBS, and Guggenheim. At the moment, Oracle stock currently lands a Zacks Rank #3 (Hold). However, it wouldn't be surprising if a buy rating is on the way and perhaps higher highs for ORCL, as earnings estimates for Oracle are likely to rise in correlation with the announcement of its lucrative cloud services deal. Want the latest recommendations from Zacks Investment Research? Today, you can download 7 Best Stocks for the Next 30 Days. Click to get this free report Oracle Corporation (ORCL) : Free Stock Analysis Report Inc. (AMZN) : Free Stock Analysis Report Microsoft Corporation (MSFT) : Free Stock Analysis Report NVIDIA Corporation (NVDA) : Free Stock Analysis Report Alphabet Inc. (GOOGL) : Free Stock Analysis Report This article originally published on Zacks Investment Research ( Zacks Investment Research Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data

Associated Press
20 hours ago
- Associated Press
CyCraft Launches XecGuard: LLM Firewall for Trustworthy AI
TAIPEI, TAIWAN - Media OutReach Newswire - 1 July 2025 - CyCraft, a leading AI cybersecurity firm, today announced the global launch of XecGuard, the industry's first plug-and-play LoRA security module purpose-built to defend Large Language Models (LLMs). XecGuard's introduction marks a pivotal moment for secure, trustworthy AI, addressing the critical security challenges posed by the rapid adoption of LLMs. CyCraft Co-Founders (from left to right): Benson Wu (CEO), Jeremy Chiu (CTO), and PK Tsung (CISO) are leading the mission to build the world's most advanced AI security platform. Trustworthy AI Matters The transformative power of Large Language Models (LLMs) brings significant security uncertainty, requiring enterprises to urgently safeguard their AI models from malicious attacks like prompt injection, prompt extraction, and jailbreak attempts. Historically, AI security has been an 'optional add-on' rather than a fundamental feature, leaving valuable AI and data exposed. This oversight can compromise sensitive data, undermine service stability, and erode customer trust. CyCraft emphasizes that 'AI security must be a standard feature—not an optional add-on,' believing it's paramount for delivering stable and trustworthy intelligent services. The Imminent Need for Proactive AI Defense The need for immediate and effective AI security is more critical than ever before. As AI becomes increasingly embedded in core business operations, the attack surface expands exponentially, making proactive defenses an absolute necessity. CyCraft has leveraged its extensive 'battle-tested expertise across critical domains—including government, finance, and high-tech manufacturing' to precisely address these emerging AI-specific threats. The development of XecGuard signifies a shift from 'using AI to tackle cybersecurity challenges' to now 'using AI to protect AI' , ensuring that security and resilience are embedded from day one. 'AI security must be a standard feature—not an optional add-on,' stated Benson Wu, CEO, highlighting XecGuard's resilience and integration of experience from defending critical sectors. Jeremy Chiu, CTO and Co-Founder, emphasized, 'In the past, we used AI to tackle cybersecurity challenges; now, we're using AI to protect AI,' adding that XecGuard enables enterprises to confidently adopt AI and deliver trustworthy services. PK Tsung, CISO, concluded, 'With XecGuard, we're empowering enterprises to embed security and resilience from day one' as part of their vision for the world's most advanced AI security platform. CyCraft's Solution: XecGuard Empowers Secure AI Deployment CyCraft leads with the global launch of XecGuard, the industry's first plug-and-play LoRA security module purpose-built to defend LLMs. XecGuard provides robust protection against prompt injection, prompt extraction, and jailbreak attacks, ensuring enterprise-grade resilience for AI models. Its seamless deployment allows instant integration with any LLM without architectural modification, delivering powerful autonomous defense out of the box. XecGuard is available as a SaaS, an OpenAI-compatible LLM firewall on your cloud (e.g., AWS or Cloudflare Workers AI), or an embedded firewall for on-premises, NVIDIA-powered custom LLM servers. Rigorously validated on major open-source models like Llama 3B, Qwen3 4B, Gemma3 4B, and DeepSeek 8B, it consistently improves security resilience while preserving core performance, enabling even small models to achieve protection comparable to large commercial-grade systems. Even small models gain enterprise-level defenses, approaching large commercial-grade performance. Real-world validation through collaboration with APMIC, an NVIDIA partner, integrated XecGuard into the F1 open-source model, demonstrating an average 17.3% improvement in overall security defense scores and up to 30.1% in specific attack scenarios via LLM Red Teaming exercises. With XecGuard and the Safety LLM service, CyCraft delivers enterprise-grade AI security, accelerating the adoption of resilient and trustworthy AI across industries, empowering organizations to deploy AI securely, protect sensitive data, and drive innovation with confidence. To learn more about how XecGuard can protect your LLMs and to request a demo, visit: Hashtag: #CyCraft #LLMFirewall #AISecurity The issuer is solely responsible for the content of this announcement. About CyCraft Technology CyCraftis a leading AI-driven cybersecurity company in the Asia-Pacific region. Trusted by hundreds of organizations in defense, finance, and semiconductor industries, our AI is designed to prevent, preempt, and protect against cyber threats. Our expertise has been recognized by top-tier institutions like Gartner and IDC and showcased at prestigious global conferences, including Black Hat, DEFCON, EMNLP, and Code Blue.