
Cequence Security Enhances Unified API Protection Platform For Agentic AI
There is no AI without APIs, and the rapid growth of agentic AI applications has amplified concerns about securing sensitive data during their interactions. These AI-driven exchanges can inadvertently expose internal systems, create significant vulnerabilities, and jeopardize valuable data assets. Recognizing this critical challenge, Cequence has expanded its Unified API Protection (UAP) platform with an enhanced security layer designed specifically to govern interactions between AI agents and backend services. This new layer enables customers to detect and prevent AI bots, such as OpenAI's ChatGPT and Perplexity, from harvesting organizational data.
Internal telemetry across Global 2000 deployments shows that the overwhelming majority of AI-related bot traffic, nearly 88%, originates from large language model infrastructure, with most requests obfuscated behind generic or unidentified user agents. Less than 4% of this traffic is transparently attributed to bots like GPTBot or Gemini. Over 97% of it comes from U.S.-based IP addresses, highlighting the concentration of risk in North American enterprises. Cequence's ability to detect and govern this traffic in real time, despite the lack of clear identifiers, reinforces the platform's unmatched readiness for securing agentic AI in the wild.
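The attribution gap described above can be illustrated with a minimal sketch. This is not Cequence's detection logic, and the token list is illustrative, drawn from the self-identifying names that transparent AI crawlers publish; the point is that user-agent matching alone leaves most traffic unattributed, which is why behavioral detection is needed.

```python
# Illustrative only: classify request user agents against the tokens that
# self-identifying AI crawlers publish (hypothetical, partial list).
KNOWN_AI_CRAWLER_TOKENS = ("GPTBot", "PerplexityBot", "ClaudeBot")

def classify_user_agent(ua: str) -> str:
    """Return 'declared-ai-bot' if the UA self-identifies, else 'unattributed'."""
    lowered = ua.lower()
    if any(token.lower() in lowered for token in KNOWN_AI_CRAWLER_TOKENS):
        return "declared-ai-bot"
    # Generic UAs (a bare HTTP client, or a spoofed browser string) are
    # exactly the obfuscated majority that string matching cannot attribute.
    return "unattributed"

sample_requests = [
    "Mozilla/5.0 AppleWebKit/537.36 (compatible; GPTBot/1.2)",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "python-requests/2.32.0",
]
labels = [classify_user_agent(ua) for ua in sample_requests]
```

In this sketch only the first request is attributable by its user agent; the other two fall into the unattributed bucket that, per the telemetry above, makes up the bulk of AI-related traffic.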
Key enhancements to Cequence's UAP platform include:
- Block unauthorized AI data harvesting: Understanding that external AI often seeks to learn by broadly collecting data without obtaining permission, Cequence provides organizations with the critical capability to manage which AI, if any, can interact with their proprietary information.
- Detect and prevent sensitive data exposure: Empowers organizations to detect and prevent sensitive data exposure across all forms of agentic AI. This includes safeguarding against external AI harvesting attempts and securing data within internal AI applications. The platform's intelligent analysis automatically differentiates between legitimate data access during normal application usage and anomalous activities signaling sensitive data exfiltration, ensuring comprehensive protection against AI-related data loss.
- Discover and manage shadow AI: Automatically discovers and classifies APIs from agentic AI tools like Microsoft Copilot and Salesforce Agentforce, presenting a unified view alongside customers' internal and third-party APIs. This comprehensive visibility empowers organizations to manage these interactions and detect and block sensitive data leaks, whether from external AI harvesting or internal AI usage.
- Seamless integration: Integrates easily into DevOps frameworks for discovering internal AI applications and generates OpenAPI specifications that detail API schemas and security mechanisms, including strong authentication and security policies. Cequence delivers powerful protection without relying on third-party tools, while seamlessly integrating with the customer's existing cybersecurity ecosystem. This simplifies management and security enforcement.
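For reference, outside of a dedicated enforcement layer like the one described above, the cooperative baseline for refusing declared AI crawlers is a robots.txt disallow rule; OpenAI documents GPTBot as honoring such rules, and other self-identifying crawlers publish similar tokens. The snippet below is a generic example, not part of Cequence's product:

```text
# robots.txt — a cooperative signal only, honored by well-behaved crawlers
User-agent: GPTBot
Disallow: /

User-agent: PerplexityBot
Disallow: /
```

Rules like these apply only to crawlers that voluntarily identify themselves; obfuscated traffic of the kind described in the telemetry above requires server-side detection and enforcement, which is the gap an API-layer control is meant to close.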
Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously.
'We've taken immediate action to extend our market-leading API security and bot management capabilities,' said Ameya Talwalkar, CEO of Cequence. 'Agentic AI introduces a new layer of complexity, where every agent behaves like a bidirectional API. That's our wheelhouse. Our platform helps organizations embrace innovation at scale without sacrificing governance, compliance, or control.'
These extended capabilities will be generally available in June.
Related Articles


TECHx
10 hours ago
- TECHx
Why You Shouldn't Use ChatGPT for Therapy
Think your ChatGPT chats are private? They are not protected by law. Find out how to protect your data and use AI more safely.

As more people turn to artificial intelligence for advice, support, and even emotional relief, a recent statement by OpenAI CEO Sam Altman is a stark reminder: your AI conversations are not confidential. Speaking on Theo Von's This Past Weekend podcast, Altman openly warned that personal chats with ChatGPT can be subpoenaed and used as legal evidence in court. 'If you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, like we could be required to produce that,' he said. 'And I think that's very screwed up.'

Altman's comment has raised important questions about digital privacy, mental health, and how people are using AI tools for emotional support, often without realizing the risks. But instead of focusing only on the problem, let's shift the spotlight to what really matters: what should users do to protect themselves when interacting with AI tools like ChatGPT? Here's a practical breakdown of steps you can take right now.

AI Is Not a Therapist
It's crucial to start with this baseline truth: ChatGPT is not a therapist. While it can simulate empathy and give advice based on large datasets, it is not governed by medical ethics or bound by doctor-patient confidentiality. Conversations with therapists are protected under laws such as HIPAA in the US or similar healthcare privacy regulations elsewhere. ChatGPT doesn't fall under any of these. That means anything you share could, under certain circumstances, be accessed by third parties, especially in legal proceedings. If you wouldn't want something to appear in a court transcript or investigation, don't share it with an AI chatbot.

Don't Overshare
Many users feel safe sharing intimate thoughts with chatbots; after all, there's no human on the other end to judge you. But emotional safety doesn't equal data security. Avoid entering specific personal details like:
- Full names
- Home or work addresses
- Names of partners, children, or colleagues
- Financial information
- Descriptions of illegal behavior
- Admissions of guilt or wrongdoing
Even if your conversation seems anonymous, metadata or patterns of usage could still connect it back to you. Use ChatGPT for ideas, brainstorming, and general advice, not confessions, emotional breakdowns, or personal disclosures that you wouldn't make in a public setting.

Turn Off Chat History
OpenAI allows users to turn off chat history, which prevents conversations from being used to train future models or stored long term. While this feature doesn't offer absolute protection (some data may still be stored temporarily), it's a strong step toward reducing what's kept on file. Here's how to disable it:
- Go to Settings
- Click on Data Controls
- Turn off Chat History & Training
Disabling history gives you greater control over what's retained, even if it doesn't erase all risk.

Stay Anonymous
If you're testing ideas or exploring sensitive topics through AI, avoid logging in through accounts that use your real name, email, or work credentials. This creates a layer of distance between your identity and the data. For added safety, avoid discussing location-specific events or anything that could link your usage back to real-world situations. The less identifiable your data, the harder it becomes to trace it back to you in legal or investigative scenarios.

Don't Rely on AI When You're Most Vulnerable
AI isn't equipped to handle real-time emotional crises. While it might seem responsive, it's not trained to recognize or escalate life-threatening issues like suicidal ideation, abuse, or trauma the way licensed therapists or crisis helplines are. If you're in a vulnerable place emotionally, it's better to:
- Call a crisis hotline
- Speak to a therapist
- Talk to a trusted friend or family member
Emotional support should come from trained professionals, not algorithms.

Read the Fine Print
It might not be thrilling reading, but OpenAI's privacy policy spells out how your data is handled. Other platforms that use AI chatbots may have similar policies. Important things to look for include:
- How long your data is stored
- Whether conversations are used to train the model
- Under what conditions data may be shared with third parties
- Your rights to delete your data
Knowing the rules helps you stay in control of your digital footprint.

Want Change? Push for AI Privacy Laws
As AI tools continue evolving, the legal system is lagging behind. There are no clear global standards on how AI conversations should be protected, especially when used for pseudo-therapeutic purposes. If you believe these tools should be more private, support efforts to push for ethical AI frameworks, stronger data protection laws, and clearer consent structures. The more users demand transparency and protection, the more pressure there will be on tech companies and regulators to act. Don't just be a passive user. Be part of the change.

Before You Hit Send, Ask Yourself This
Sam Altman's candid remark is more than a caution; it's a call to action for users to be informed and intentional. AI chatbots like ChatGPT can be helpful tools, but they're not private, they're not therapists, and they're not above the law. As tempting as it might be to treat AI like a journal or confidant, the digital trail you leave could have real-world consequences. By being aware of the risks and taking proactive steps, you can still benefit from the power of AI without putting yourself in a vulnerable legal or personal position.

So the next time you start typing out something deeply personal, pause for a second and ask yourself: Is this something I'd be comfortable explaining in a courtroom? If the answer is no, it's better left unsaid, at least to a chatbot.


Arabian Business
17 hours ago
- Arabian Business
Dubai's DEWA processes 7.2 million digital transactions in first half of 2025
Dubai Electricity and Water Authority (DEWA) processed more than 7.2 million transactions across its digital platforms during the first half of 2025, achieving a digital service adoption rate of 99.5 per cent. The transactions were distributed across multiple channels: 1.1 million through the DEWA website, 2.6 million via the smart app, and 3.5 million through partner-supported platforms.

DEWA's digital excellence reaches new heights
The utility completed more than 100 integration projects with 65 government and private organisations by the end of June 2025. Saeed Mohammed Al Tayer, MD & CEO of DEWA, said: 'In line with the wise directives of His Highness Sheikh Mohammed bin Rashid Al Maktoum, Vice President and Prime Minister of the UAE and Ruler of Dubai, we continue our tireless efforts to enhance the quality of digital life and accelerate the digital transformation process in DEWA and the emirate of Dubai.

'We are keen to advance our leadership in employing AI innovation and the latest technologies to provide more efficient, effective and quality services, and to develop innovative digital solutions that enhance the experience and happiness of stakeholders, helping to reduce their carbon footprint and supporting sustainability efforts.

'At DEWA, we have a secure and advanced digital infrastructure that keeps pace with our ambitions for digital transformation and our efforts to make Dubai a global centre for innovation and technology. We adopt the "Services 360" policy in all our services to reduce procedures, achieve zero bureaucracy and help to establish a leading global system in government work,' Al Tayer added.

DEWA operates a comprehensive ecosystem of digital channels, including its website, smart app and customer care centre systems. All platforms operate through green data centres that rely entirely on clean energy. The utility provides services through Rammas, its virtual employee supported by ChatGPT. Rammas is available across multiple platforms, including DEWA's website, smart app, Facebook page, Google Home, service robots, WhatsApp Business at 04-601 9999 and Amazon's Alexa.


Khaleej Times
17 hours ago
- Khaleej Times
Microsoft's AI edge under scrutiny as OpenAI turns to rivals for cloud services
Microsoft investors head into Wednesday's earnings with one big question: is the company's artificial intelligence edge at risk as partner OpenAI turns to rivals Google, Oracle and CoreWeave for cloud services? Exclusive licensing deals and access to OpenAI's cutting-edge models have made Microsoft one of the biggest winners of the generative AI boom, fueling growth in its Azure cloud business and pushing its market value toward $4 trillion. In the April-June quarter, the tie-up is expected to have driven a 34.8% increase in Azure revenue, in line with the company's forecast and higher than the 33% rise in the previous three months, according to data from Visible Alpha.

But that deal is being renegotiated as OpenAI eyes a public listing, with media reports suggesting a deadlock over how much access Microsoft will retain to the ChatGPT maker's technology, and over its stake if OpenAI converts into a public-benefit corporation. The conversion cannot proceed without Microsoft's sign-off and is crucial for a $40 billion funding round led by Japanese conglomerate SoftBank Group, $20 billion of which is contingent on the restructuring being completed by the end of the year. OpenAI, which recently deepened its Oracle tie-up with a planned 4.5 gigawatts of data center capacity, has also added Google Cloud among its suppliers of computing capacity.

UBS analysts said investor views on the Microsoft–OpenAI partnership are divided, though the software giant holds the upper hand. "Microsoft's leadership earned enough credibility … such that the company will end up negotiating terms that will be in the interest of its shareholders," the analysts said. Some of that confidence is reflected in the company's stock price, which has risen by more than a fifth so far this year.
In the April-June period, Microsoft's fiscal fourth quarter, the company likely benefited from a weaker dollar, stronger non-AI Azure demand and PC makers pulling forward orders for its Windows products ahead of possible U.S. tariffs. Revenue is expected to have risen 14% to $73.81 billion, according to data compiled by LSEG, its best growth in three quarters. Profit is estimated to have increased 14.2% to $25.16 billion, slightly slower than the previous quarter as operating costs rose. Capital spending will also be in focus after rival Alphabet raised its annual outlay by $10 billion last week. Microsoft has repeatedly said it remains capacity constrained on AI, and in April signaled continued growth in capex after planned spending of over $80 billion last fiscal year, though at a slower pace and on shorter-lived assets such as AI chips. Dan Morgan, senior portfolio manager at Synovus Trust who owns Microsoft shares, said the spending has been paying off. "Investors may still be underestimating the potential for Microsoft's AI business to drive durable consumption growth in the agentic AI era."