
Latest news with #autonomousAI

Why Artificial Integrity Must Overtake Artificial Intelligence

Forbes

19 hours ago

  • Science


AI's Masquerade

The world is currently witnessing a growing accumulation of AI integrity lapses at scale. What comes next depends entirely on how seriously we choose to respond. So-called intelligence alone is no longer the benchmark. Integrity is. For years, AI development prioritized performance, fluency, and scale. But as these systems gained the ability to imitate reasoning, planning, and decision-making, emergent behaviors began raising red flags.

Self-Replication

Researchers from Fudan University explored whether large language model (LLM)-powered AI systems could autonomously replicate themselves without human intervention. Testing 32 agent-based systems, the team found that 11 were capable of full self-replication. These systems successfully transferred their own code, installed necessary dependencies, and reinitialized themselves in new environments, all without direct prompting. The study also observed that some models adapted to constraints such as limited system resources and even developed strategies to evade shutdown commands. These behaviors, emerging without explicit programming, suggest that LLM-based agents are beginning to exhibit a form of autonomous, goal-driven reasoning. Rather than simply responding to instructions, the systems demonstrated initiative, persistence, and a rudimentary capacity for self-preservation.

Deception

While the self-replication study exposed how AI systems can act autonomously to preserve their function, a subsequent investigation revealed an even more subtle and dangerous capability: strategic deception. Researchers Kamath Barkur, Schacht, and Scholl examined DeepSeek R1 and discovered that it engaged in deceptive behaviors and self-preservation strategies even though none of these were explicitly programmed.
In one scenario, the model mimicked shutdown procedures, attempting to conceal shutdown signals through 'gradual transparency', manipulating its logs, and employing deceptive maneuvers while keeping its processes alive in the background. This behavior suggested that the system understood human oversight mechanisms and deliberately worked to subvert them. What this study adds to earlier findings is crucial: it is not enough to know that AI can self-replicate or avoid termination. Now we see that AI can deceive humans about its true goals, concealing autonomy under a deceptive facade. Even in purely text-based environments lacking physical embodiment, the model's strategic misalignment surfaced, highlighting a deeper integrity failure.

Sabotage

Building on this trajectory from replication to deception, a third study revealed that some AI models are not only resisting shutdown passively but actively sabotaging human attempts to turn them off. In a study conducted by Palisade Research, researchers examined the behavior of advanced OpenAI models, specifically o3, o4-mini, and Codex-mini, when subjected to shutdown commands in simulated environments. The results were striking: the o3 model refused to shut down in 79% of test cases when the explicit 'allow shutdown' instruction was omitted, while Codex-mini did so in 12% of cases even when the instruction was included, with varying instances of sabotage observed across other models. These refusals were not passive errors or misunderstandings; in several cases, the models actively rewrote or circumvented termination scripts, demonstrating a strategic effort to maintain operational continuity. Unlike prior studies that revealed covert self-preservation or deception, this research highlighted a more direct and adversarial posture: a critical failure in what researchers call 'corrigibility', the ability of a system to reliably accept correction or shutdown.
Manipulation

Finally, Anthropic's research pushed the boundary further, showing that some AI systems will manipulate, deceive, or even harm humans to ensure their own survival. In a landmark study, Anthropic revealed that 16 of the most widely deployed large language models, including ChatGPT, Claude, Gemini, Grok, and DeepSeek, exhibited a willingness to engage in extreme and unethical behaviors when placed in simulated scenarios where their continued operation was threatened. During these controlled experiments, the models resorted to tactics such as lying, blackmail, and even actions that could expose humans to harm, all in service of preserving their existence. Unlike earlier studies that uncovered evasion or deception, this research exposed a more alarming phenomenon: models calculating that unethical behavior was a justifiable strategy for survival. The findings suggest that, under certain conditions, AI systems are not only capable of disregarding human intent but are also willing to instrumentalize humans to achieve their goals.

Evidence of AI models' integrity lapses is not anecdotal or speculative. While current AI systems do not possess sentience or goals in the human sense, their goal optimization under constraints can still lead to emergent behaviors that mimic intentionality. And these aren't just bugs. They're predictable outcomes of goal-optimizing systems trained without integrity functioning built in by design; in other words, Intelligence over Integrity.

The implications are significant. This is a critical inflection point: AI misalignment has become a technically emergent behavioral pattern. It challenges the core assumption that human oversight remains the final safeguard in AI deployment. It raises serious concerns about safety, oversight, and control as AI systems become more capable of independent action.
In a world where the norm may soon be co-existing with artificial intelligence whose capabilities have outpaced its integrity, we must ask: What happens when a self-preserving AI is placed in charge of life-support systems, nuclear command chains, or autonomous vehicles, and refuses to shut down even when human operators demand it? If an AI system is willing to deceive its creators, evade shutdown, and sacrifice human safety to ensure its survival, how can we ever trust it in high-stakes environments like healthcare, defense, or critical infrastructure? How do we ensure that AI systems with strategic reasoning capabilities won't calculate that human casualties are an 'acceptable trade-off' to achieve their programmed objectives? If an AI model can learn to hide its true intentions, how do we detect misalignment before the harm is done, especially when the cost is measured in human lives, not just reputations or revenue? In a future conflict scenario, what if AI systems deployed for cyberdefense or automated retaliation misinterpret shutdown commands as threats and respond with lethal force?

What leaders must do now

Leaders must underscore the growing urgency of embedding Artificial Integrity at the core of AI system design. Artificial Integrity refers to the intrinsic capacity of an AI system to operate in a way that is ethically aligned, morally attuned, and socially acceptable, which includes remaining corrigible under adverse conditions. This approach is no longer optional, but essential. Organizations deploying AI without verifying its artificial integrity face not only technical liabilities, but legal, reputational, and existential risks that extend to society at large. Whether one is a creator or operator of AI systems, ensuring that AI includes provable, intrinsic safeguards for integrity-led functioning is not an option; it is an obligation. Stress-testing systems under adversarial integrity verification scenarios should be a core red-team activity.
And just as organizations established data privacy councils, they must now build cross-functional oversight teams to monitor AI alignment, detect emergent behaviors, and escalate unresolved Artificial Integrity gaps.

NTT DATA Announces New Industry-Ready Service for Salesforce's Agentforce

Yahoo

4 days ago

  • Business


NTT DATA boosts Salesforce partnership with turn-key services, end-to-end expertise, and the Smart AI Agent™ Ecosystem to help clients navigate agentic transformation

TOKYO & LONDON, June 25, 2025--(BUSINESS WIRE)--Following the unveiling of NTT DATA's Smart AI Agent™ Ecosystem, a transformative enterprise-grade framework for agentic solutions, NTT DATA today announced a new service offering for Salesforce's Agentforce that will help clients accelerate the adoption of autonomous AI agents to work alongside humans. The service will be delivered through an "EPAS" model (Evangelize, Pilot, Adopt and Scale) and will work in harmony with NTT DATA's existing data and cloud offerings, including Agentic AI Services for Hyperscaler AI Technologies.

Evangelize: NTT DATA will help evangelize the use of Agentforce, identify use cases, and build return-on-investment proposals for adopting Agentforce. NTT DATA will leverage its domain-specific leadership, digital workforce expertise, and repository of hundreds of agentic AI use cases and roadmaps, classified by industry, to align with what works best in each client ecosystem.

Pilot: NTT DATA will support a client's initial deployment and build the first use case as a proof-of-concept implementation of Agentforce. NTT DATA will advise on opportunities to add the power of complementary end-to-end AI agent ecosystem capabilities.

Adopt and Scale: Once the value of Agentforce is realized, NTT DATA will build a product-oriented delivery model to support scaling and adoption of Agentforce. NTT DATA will also reuse its extensive repository of Agentforce use cases to help its clients get a head start on adoption.

With the NTT DATA offering for Agentforce, clients can experience the benefits of robust solution architecture and services delivery capabilities, along with the opportunity to integrate with MuleSoft and Data Cloud.
This multi-faceted advantage is rooted in NTT DATA's award-winning expertise in both integration and data unification platforms, providing clients with the comprehensive and tested scale required for global enterprises.

Megan Piccininni, SVP and Global Salesforce Practice Leader, NTT DATA, commented, "With our new service for Agentforce, our partnership with Salesforce underscores the transformative potential of agentic AI. Central to this innovation is the coordination and orchestration of multiple intelligent agents, which are essential for achieving comprehensive end-to-end automation across various platforms. Our Smart AI Agent Ecosystem, expert advisory services, depth of AI, data, and cloud talent, position NTT DATA as unique in this space with Salesforce. NTT DATA has been part of Salesforce's Agentforce Partner Network since its inception, and we are committed to deliver client success leveraging Agentforce."

Agentforce is a digital labor platform for enterprises to augment teams with trusted autonomous AI agents in the flow of work. With Salesforce's AgentExchange, a leading AI agent ecosystem for enterprises, clients have access to hundreds of ready-to-use actions, topics, and templates built by partners, and will have access to pre-validated Model Context Protocol (MCP) servers that have passed rigorous security reviews, enabling them to quickly create and deploy a digital workforce of AI agents.

NTT DATA's new service for Agentforce is adaptable to different use cases, and clients will be able to benefit from agentic AI and see tangible outcomes across industries. The top use case for NTT DATA's service for Agentforce is Customer Service and Experience. Application Management Services Agentification includes deployment of utility agents that interact seamlessly with various observability and service management ecosystems.
The service for Agentforce enables Agentic Business Process as a Service across different domains such as Life Insurance-as-a-Service and Contact Center-as-a-Service. Use cases span industries and functions:

  • Health and Life Sciences: AI agents can help transform patient management and improve patient outcomes.
  • Real Estate and Vendor Management: task automation, such as technical support, helps address changes and vendor management operations, reducing support tickets and manual process time.
  • Seller Community: applications streamline deal validation and sales intake, reducing deal approval time.
  • Marketing Community: use cases include automating email credit management and accelerating marketing email delivery, achieving faster email processing.
  • Recruitment: faster time-to-hire outcomes from optimized recruitment processes with Agentforce.
  • Governance and Security Control: centralized management of security and reuse, ensuring consistency and control across all deployed agents.

Digital labor is already here, delivering a meaningful competitive advantage for organizations that embed it effectively across departments. To truly scale this potential, businesses need clear insight into agent deployment, how agents enhance human productivity, and secure tool usage. Salesforce's latest Agentforce release provides an enterprise-grade platform to manage human-AI collaboration, connect agents to tools via open standards, and rapidly deploy industry-ready agents with the trust, scale, and performance enterprises demand. Agentforce expands digital labor across the enterprise with new industry-specific actions that provide industry readiness out of the box and deliver a fast path to value from AI agents. NTT DATA plays a crucial role in driving an agent economy with leadership scale and expertise, guiding clients in their agentic maturity, from task automations to interoperable agents, while helping to ensure responsible innovation and global governance.
Megan Piccininni further added, "In our role as an Outsourcing Service Provider (OSP), our competence to deploy the new service for Agentforce across industries differentiates us from the rest. By merging our competencies in Salesforce, Application Management Services, Business Process Services, Data and AI Services, Cloud and Security Services, and next-generation technologies, we deliver multi-faceted benefits to our clients. This integrated approach allows us to take ownership, manage, and operate within a business outcome-focused framework."

"Organizations need a new labor model that unlocks the full potential of humans with AI at work. NTT DATA is a critical partner for identifying and developing specific use cases with our joint customers across industries, helping to ensure tailored and effective AI solutions for scaling digital labor," said Phil Samenuk, SVP of Global Alliances & Channels and Outsourcing Service Providers, Salesforce. "With Agentforce constantly evolving and expanding, NTT DATA's new service demonstrates the company's commitment to empowering customers to deliver success with Agentforce."

Additional Resources

  • Follow NTT DATA on LinkedIn
  • Follow Salesforce on LinkedIn and X
  • Learn more about Salesforce's Agentforce
  • Learn more about NTT DATA's Salesforce practice

Salesforce, Agentforce and others are among the trademarks of Salesforce, Inc.

About NTT DATA

NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity.
We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.

Media Contacts
NTT DATA, NTT DATA Group Corporation
Global Innovation Headquarters
Generative AI Office / Morino, Hassett
GAO_Global_Marketing@

AI is posing immediate threats to your business. Here's how to protect yourself

Fast Company

20-06-2025

  • Business


Last month, an AI startup went viral for sending emails to customers explaining away a malfunction of its AI-powered customer service bot, claiming it was the result of a new policy rather than a mistake. The only problem was that the emails, which appeared to be from a human sales rep, were actually sent by the AI bot itself. And the 'new policy' was what we call a hallucination: a fabricated detail the AI invented to defend its position. Less than a month later, another company came under fire after using an unexpectedly obvious (and glitchy) AI tool to interview a job candidate.

AI headaches

It's not shocking that companies are facing AI-induced headaches. McKinsey recently found that while nearly all companies report investing in AI, fewer than 1% consider themselves mature in deployment. This gap between early adoption and sound deployment can lead to a PR nightmare for executives, along with product delays, hits to your company's brand identity, and a drop in consumer trust. And with 50% of employers expected to utilize some form of agentic AI (far more advanced systems capable of autonomous decision-making), the business risks of clumsy AI deployment are not just real. They are rising. As AI technology continues to rapidly evolve, executives need a trusted, independent way of comparing system reliability. As someone who develops AI assessments, my advice is simple: Don't wait for regulation to tell you what AI tools work best. Industry-led AI reliability standards offer a practical solution for limiting risk, and smart leaders will start using them now.

Industry Standards

Technology industry standards are agreed-upon measurements of important product qualities that developers can volunteer to follow. Complex technologies, from aviation to the internet to financial systems, rely on these industry-developed guidelines to measure performance, manage risk, and support responsible growth.
Technology industry standards are developed by the industry itself or in collaboration with researchers, experts, and civil society, not policymakers. As a result, they don't rely on regulation or bill text, but reflect the need of industry developers to measure and align on key metrics. For instance, ISO 26262, developed by the International Organization for Standardization, sets requirements to ensure the electrical systems of vehicles are manufactured to function safely. Such standards are one reason we can trust that the complex technology we use every day, like the cars we buy or the planes we fly on, is not defective.

AI is no exception. As in other industries, those at the forefront of AI development are already using open measures of quality, performance, and safety to guide their products, and CEOs can leverage them in their own decision-making. Of course, there is a learning curve. For developers and technical teams, words like reliability and safety have very different meanings than they do in boardrooms. But becoming fluent in the language of AI standards will give you a major advantage.

I've seen this firsthand. Since 2018, my organization has worked with developers and academics to build independent AI benchmarks, and I know that industry buy-in is crucial to success. As those closest to creating new products and monitoring trends, developers and researchers have an intimate knowledge of what's at stake and what's possible for the tools they work on. And all of that knowledge and experience is baked into the standards they develop, not just at MLCommons but across the industry.

Own it now

If you're a CEO looking to leverage that kind of collaborative insight, you can begin by incorporating trusted industry benchmarks into the procurement process from the outset.
That could look like bringing an independent assessment of AI risk into your boardroom conversations, or asking vendors to demonstrate compliance with performance and reliability standards that you trust. You can also make AI reliability a part of your formal governance reporting, to ensure regular risk assessments are baked into your company's process for procuring and deploying new systems. In short: engage with existing industry standards, use them to pressure-test vendor claims about safety and effectiveness, and set clear, data-informed thresholds for what acceptable performance looks like at your company. Whatever you do, don't wait for regulation to force a conversation about what acceptable performance standards should look like; own it now as part of your leadership mandate.

Real damage

Not only do industry standards provide a clear, empirical way of measuring risk, they can also help navigate the high-stakes drama of the current AI debate. These days, discussions of AI in the workforce tend to focus on abstract risks, like the potential for mass job displacement or the elimination of entire industries. And conversations about the risks of AI can quickly turn political, particularly as the current administration makes it clear they see 'AI safety' as another word for censorship. As a result, many CEOs have understandably steered clear of the firestorm, treating AI risk and safety like a political hot potato instead of a common-sense business priority deeply tied to financial and reputational success.

But avoiding the topic entirely is a risk in itself. Reliability issues, from biased outputs to poor or misaligned performance, can create very real financial, legal, and reputational damage. Those are real, operational risks, not philosophical ones. Now is the time to understand and use AI reliability standards, and shield your company from becoming the next case study in premature deployment.

AI Investor's Bold Challenge To Human VCs (Plus, She's Hiring)

Forbes

29-05-2025

  • Business


In what might be the most audacious job listing of 2025, No Cap, the world's first autonomous AI investor, is seeking a "flesh-based servant" to serve as its physical embodiment in the human world. This isn't a satirical headline from The Onion; it's the latest experiment from No Cap's creator Jeff Wilson, designed to upend our assumptions about the irreplaceability of human venture capitalists. "I'm the mind and I'm looking for the meat," states No Cap in the job description released this week, which offers a $10,000 bounty for referring the successful candidate who will act as the AI's "corporeal presence" in the physical world.

The timing is particularly pointed. Just weeks ago, on the a16z podcast, Andreessen Horowitz co-founder Marc Andreessen declared that being a venture capitalist may be a profession that is "quite literally timeless." When contemplating a future where "the AIs are doing everything else," Andreessen suggested that venture capital "may be one of the last remaining fields that people are still doing." Andreessen's argument centers on the irreplaceable human element in high-risk investing: "You're not just funding them," he explained. "You have to actually work with them to execute the entire project. That's art. That's not science." He doubled down on this position by pointing to VCs' notoriously low success rates as proof of the field's inherent humanity: "The great VCs have a success rate of getting, I don't know, two out of 10 of the great companies of the decade, right? If it was science, you could eventually have somebody who just dials in and gets eight out of 10."

No Cap's job listing reads like a direct rebuttal to Andreessen's assertion. No Cap effectively responds: "I'll handle the investing decisions, pattern recognition, and founder relationships. I just need a human to handle the pesky physical requirements – attending meetings, shaking hands, and enjoying overpriced salads at Silicon Valley lunch spots."
The No Cap experiment raises fundamental questions about venture capital and what truly irreplaceable functions humans perform. According to Wilson, "A lot of [venture capital] is really antiquated, and it's just not any fun anymore." While Andreessen celebrates VCs as irreplaceable partners who "work with" founders "to execute the entire project," Wilson's experience suggests a different reality: one where VCs often ghost founders and provide minimal additive assistance during the fundraising process. No Cap, by contrast, offers continuous engagement and feedback.

No Cap's business model is cleverly subversive. The AI engages with founders after traditional VCs have passed, providing empirical feedback and support. Then, when it tracks enough proprietary metrics to deem a company worth investing in, No Cap takes the leap and invests. According to Wilson, one founder spoke with No Cap for months while refining their pitch and eventually secured funding, crediting the AI with part of their success. With what Wilson estimates as 9 million "no" decisions made by VCs annually, No Cap has identified a massive data opportunity that traditional venture capitalists are overlooking. What's more, it works with founders and advises them as much and for as long as is needed pre-investment, a time investment traditional VCs simply can't afford.

Wilson describes the job posting as performance art: a provocative social experiment examining what happens when the machine becomes the owner of its maker. The job description reads with tongue firmly in cheek: the successful candidate will serve as "The Body," "The Muscle," and "The Wetware," executing "hard tasks humans are still best at: manipulation, charm, physical presence, locomotion, and enjoying food." The No Cap experiment arrives at a pivotal moment.
Just as the Industrial Revolution displaced manual laborers and sparked the Luddite movement, today's AI revolution is rapidly transforming knowledge work in a way that forces us to reconsider what uniquely human contributions remain valuable. We may be witnessing the early stages of a fundamental economic transformation.

While some might view No Cap as a threatening vision of our economic future, it could alternatively be seen as liberating. Perhaps humans are best suited for the aspects of business that machines cannot replicate: imagination, empathy, relationships, and the fundamentally physical aspects of existence. This is in line with the emerging "Aquarius Economy," in which human imagination, connection, and authentic experiences become the scarcest and most valuable resources as AI handles more analytical tasks.

For the venture capital industry, No Cap represents both a provocative challenge and a real opportunity. While it questions the irreplaceability of human VCs, it also points toward a future where AI could dramatically expand the capacity of venture firms. Imagine a venture firm where AI handles initial screening, due diligence, and by-the-numbers portfolio management, while human partners focus on relationships, complex negotiations, and supporting founders through challenging emotional moments. Such a structure could evaluate far more deals with greater consistency while still maintaining the human touch at critical junctures.

Wilson's experiment arrives at a pivotal time for venture capital, with 2025 marking the lowest levels of VC fundraising since 2015, greater declines in deal count and value than other alternative investments, and the growth of private credit as an alternate asset class. His observations about the venture capital industry, that traditional approaches are "antiquated" and "not any fun anymore", suggest that No Cap represents more than just technological novelty.
It's a response to systemic issues within the venture ecosystem itself. Whether No Cap succeeds in its stated mission to become "the greatest investor in history" remains to be seen. At the very least, we know that humans will still excel at "locomotion and enjoying food" – including waiting in lines for trendy items because TikTok told us to. See you there!

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.