
Latest news with #Ivanti

Is AI Making Us All Liars? Maintaining Truth In An Era Of Efficiency

Forbes

08-07-2025

  • Business
  • Forbes

Is AI Making Us All Liars? Maintaining Truth In An Era Of Efficiency

Sal Viveros, Head of Global Corp Communications at Ivanti.

There has always been tension between speed and accuracy. It's at the root of more than a few problems, and it's not hard to point to a culprit exponentially increasing that tension: AI.

I'm not an AI hater. Far from it. But I'm also hyper-aware of AI's role in creating and magnifying issues. I can speak specifically to the PR and corporate communications space, but it goes beyond comms, and the impact needs to be acknowledged and addressed sooner rather than later.

A January 2025 report from Muck Rack shows that AI adoption in PR has nearly tripled since 2023, but only 38% of PR pros report having company guidelines for AI use. Although that's an increase from last year, it's still nowhere near where it needs to be, given the ethical implications of unregulated AI usage.

I'm the first to acknowledge AI's productivity benefits. From drafting content to analyzing media coverage, these tools transform our daily workflows. But at Ivanti, where we evaluate the impact of technology across industries, we recognize that efficiency without verification creates new vectors for unintentional misinformation. And that's putting it nicely.

When Efficiency Supersedes Accuracy

AI is pretty irresistible in the communications space. It can create outlines in seconds, analyze interviews in minutes and scale campaign personalization effortlessly. But what's the flip side of those benefits? Leveraging unregulated AI creates conditions where misinformation can easily enter official communications channels.

Unlike human writers, AI models process information based on statistical patterns without distinguishing between fact and fiction. These systems produce authoritative-sounding content regardless of accuracy, a particularly dangerous dynamic in communications, where credibility is essential. And that's still a relatively innocuous downside compared to the privacy concerns inherent in prompting AI with potentially sensitive company details or PII (personally identifiable information).

The Ethical Tangles Of AI

Fact-checking and sensitive-information exposure aren't even the whole concern here. Other ethical tangles include questions like:

  • Should communications materials disclose when AI contributed substantially to their creation?
  • What accountability do PR professionals bear for AI-generated content they publish?
  • How do we prevent algorithmic bias from perpetuating harmful stereotypes in communications?

The Public Relations Society of America's (PRSA) guidance on AI ethics addresses these concerns directly, questioning whether audiences should know when they're interacting with AI rather than humans. These considerations become increasingly important as AI capabilities advance.

For what it's worth, Ivanti has taken a proactive step by creating its own generative AI platform for internal use. This platform allows employees to leverage AI to foster creativity and efficiency while maintaining ethical guardrails. By building a bespoke AI tool, we support innovation across the organization while ensuring that AI is used in a way that aligns with company standards and values.

Creating Governance That Works

At Ivanti, we emphasize the need for comprehensive AI implementation plans that include training, documentation and ethics guidelines. Without these frameworks, organizations risk increased tech complexity rather than simplified workflows. That's definitely not the aim. What do these plans look like in practice?
For comms professionals seeking practical governance approaches, here are a few research-backed ideas:

  • Document clear AI boundaries. Specify which communications require complete human authorship versus AI assistance.
  • Implement fact verification protocols. Establish procedures for reviewing AI-generated statistics and references.
  • Develop transparency policies. Create guidelines on when to disclose AI involvement in content creation.
  • Diversify AI tools. Recognize that each system has different limitations and cross-check results.
  • Prioritize team training. IBM emphasizes that AI literacy forms the foundation for addressing more complex issues like bias and privacy.

What's The Real Problem?

With this intentionally provocative title, I asked whether AI is making all of us liars. The truth is that AI does seem to contribute to obfuscation. But I don't think AI itself is the problem. Instead, it's a failure to acknowledge that companies must take decisions about AI disclosures seriously and operate clearly and consistently. In short, the question isn't whether to use AI, but how to implement and manage it responsibly.

Forbes Communications Council is an invitation-only community for executives in successful public relations, media strategy, creative and advertising agencies. Do I qualify?

Why CEOs Really Do Need To Be Customer Zero

Forbes

03-07-2025

  • Business
  • Forbes

Why CEOs Really Do Need To Be Customer Zero

Dennis Kozak is the Chief Executive Officer at Ivanti, responsible for the company's overall strategic direction and growth.

A colleague once brilliantly suggested staying in your own guest room for a night to see what your guests really experience. After all, a nice mattress gets overshadowed quickly if car headlights keep waking you up, and you wouldn't know that if you didn't sleep there. Being Customer Zero is the equivalent of sleeping in your guest room every night.

My first week as CEO, I didn't need to be briefed on our products because I lived in them. I insisted our IT team set me up with the same experience our customers have: not a special executive version, not a sanitized demo, but the real thing. That decision revealed more about our business than a hundred PowerPoint presentations ever could.

But too many tech leaders remain disconnected from the day-to-day reality of using their own solutions. They see polished demos and curated metrics but miss the friction points that frustrate actual users. A CEO serving as Customer Zero is not a marketing stunt. It's not a charming talking point. It's a necessity for effective leadership and operations.

Establishing A Real Ownership Mentality

Throughout my career, I've distinguished between what I call an owner mentality and a renter mentality. Renters make decisions based on short-term convenience. Owners invest in understanding every aspect of their property because they're committed to its long-term value.

Customer Zero cultivates this ownership mentality throughout the organization. When your marketing team struggles with the same UX issues your customers face, those "minor bugs" suddenly become urgent priorities. When your sales team relies on your security solutions to protect sensitive deals, product promises transform into personal commitments.

What Being Customer Zero Looks Like In Practice

At my company, we put this approach to the test during extraordinary circumstances. When we rapidly grew to 3,200 employees through several strategic acquisitions, we faced exactly the kind of challenges our customers deal with:

  • We remotely managed and provisioned around 3,000 devices globally while deprovisioning approximately 2,000 devices, all during peak pandemic disruption.
  • Our team generated over 22,000 tickets on our platform, with automatic resolution and self-help functionality reclaiming substantial bandwidth for our IT support team.
  • We implemented our own DevSecOps processes, scanning our code for vulnerabilities and prioritizing critical security issues, the same workflow we recommend to customers.

The results weren't always comfortable, but they were invaluable. Our teams delivered unfiltered, candid feedback about functionality and user experience. We made changes accordingly, often discovering issues no focus group would have uncovered.

How Being Customer Zero Drives Transformation

Being Customer Zero drives three critical transformations:

  • Like many of our customers, our company has on-premises products moving to the cloud. By experiencing this migration firsthand, we get immediate feedback on gaps between these environments.
  • When you acquire different solutions with varying technology stacks, integration becomes critical. Our Customer Zero program evaluates these integrations through day-to-day use, testing both single-pane-of-glass management and API functionality.
  • Nothing builds credibility like saying, "We rely on this so heavily that our business would collapse without it."
Customer Zero creates authentic conviction in both sales teams and customers.

How To Become Customer Zero For Your Own Company

If you're considering implementing your own Customer Zero initiative, start with these practical approaches:

  • Champion universal adoption at the executive level.
  • Create formal feedback channels between internal users and development.
  • Measure and track internal usage metrics as seriously as customer metrics.
  • Document both successes and pain points for transparent customer conversations.
  • Prioritize internal user experience issues in your development backlogs.

The most crucial element? Commitment to authenticity. If your team discovers limitations, fix them before expecting customers to adapt around them.

Checking Your Ego At The Door

Let's be honest: Becoming Customer Zero can be humbling. Maybe really humbling. You'll discover rough edges in your products. You'll experience frustrations your customers have silently endured. You might even question past decisions about product priorities.

That discomfort is exactly the point. It forces your organization to confront reality rather than marketing aspirations. To make it work, you have to check your ego at the door.

This approach has transformed how we innovate. Our teams now operate at the leading edge, managing complex IT data while leveraging AI and automation capabilities, because our own business depends on them working flawlessly.

Every executive should regularly ask: Would I bet my business on my own product today? If the answer makes you hem and haw even a little bit, you've identified your most pressing priority. The greatest gift you can give customers isn't another feature; it's the confidence that comes from knowing you trust your solutions enough to build your own success upon them.

Forbes Business Council is the foremost growth and networking organization for business owners and leaders. Do I qualify?

How AI is becoming a secret weapon for workers

Free Malaysia Today

23-06-2025

  • Business
  • Free Malaysia Today

How AI is becoming a secret weapon for workers

PARIS: Artificial intelligence is fast becoming part of everyday working life, promising productivity gains and a transformation of working methods. Between enthusiasm and caution, companies are trying to harness this technology and integrate it into their processes. But behind the official rhetoric, a very different reality is emerging: many employees are adopting these tools discreetly, out of sight of their managers.

A recent survey conducted by software company Ivanti shows the extent of this under-the-radar adoption of AI, revealing that one-third of employees surveyed use AI tools without their supervisors' knowledge.

There are several distinct reasons for this covert strategy. For 36% of them, it is primarily a matter of gaining a "secret advantage" over their colleagues, while 30% of respondents fear that revealing their dependence on this technology could cost them their jobs. This is understandable, considering that 29% of employees are concerned that AI will diminish the value of their skills in the eyes of their employer.

The figures reveal an explosion in clandestine use: 42% of office workers say they use generative AI tools such as ChatGPT at work. Among IT professionals, this proportion reaches an impressive 74%. And close to half of office workers use AI tools not provided by their company.

Underestimating the risks

This covert use exposes organisations to considerable risks: unauthorised platforms do not always comply with security standards or corporate data-protection requirements. From confidential data and business strategies to intellectual property, anything and everything can potentially be fed into AI tools unchecked.

"It is crucial for employers to assume this is happening, regardless of any restrictions, and to assess the use of AI to ensure it complies with their security and governance standards," stressed Brooke Johnson, chief legal counsel at Ivanti.

The survey also reveals a troubling paradox: while 52% of office workers believe that working more efficiently simply means doing more work, many prefer to keep their productivity gains to themselves. This mistrust is accompanied by an AI-fuelled impostor syndrome, with 27% of users saying they don't want their abilities to be questioned.

This situation highlights a huge gap between management and employees: although 44% of professionals surveyed say their company has invested in AI, they simultaneously complain about a lack of training and skills to use these technologies effectively. This disconnect betrays a poorly orchestrated technological transformation.

In the face of this silent revolution, Johnson advocates a proactive approach: "Organisations should implement clear policies and guidelines for the use of AI tools, along with regular training sessions to educate employees on the potential security and ethical implications."

This survey suggests that companies should completely rethink their integration of AI, rather than turning a blind eye to this legion of secret users. The stakes go beyond mere operational optimisation: the most successful organisations will need to balance technological use with the enhancement of human potential.
By encouraging open dialogue, employers can foster transparency and collaboration, ensuring that the benefits of AI are harnessed safely and effectively. Ignoring this silent revolution runs the risk of deepening mutual distrust between management and employees, to everyone's detriment.

AI is becoming a secret weapon for workers

The Star

03-06-2025

  • Business
  • The Star

AI is becoming a secret weapon for workers

Artificial intelligence is gradually becoming part of everyday working life, promising productivity gains and a transformation of working methods. Between enthusiasm and caution, companies are trying to harness this revolutionary technology and integrate it into their processes. But behind the official rhetoric, a very different reality is emerging. Many employees have chosen to take the initiative, adopting these tools discreetly, out of sight of their managers.

A recent survey* conducted by software company Ivanti reveals the extent of this under-the-radar adoption of AI. One-third of employees surveyed use AI tools without their managers' knowledge.

There are several distinct reasons for this covert strategy. For 36% of them, it is primarily a matter of gaining a "secret advantage" over their colleagues. Meanwhile, 30% of respondents fear that revealing their dependence on this technology could cost them their jobs. This fear is understandable, considering that 29% of employees are concerned that AI will diminish the value of their skills in the eyes of their employer.

The figures reveal an explosion in clandestine use. Forty-two percent of office workers say they use generative AI tools such as ChatGPT at work (+16 points in one year). Among IT professionals, this proportion reaches an impressive 74% (+8 points). Now, nearly half of office workers use AI tools not provided by their company.

Underestimating the risks

This covert use exposes organizations to considerable risks. Indeed, unauthorized platforms do not always comply with security standards or corporate data protection requirements. From confidential data to business strategies to intellectual property, anything and everything can potentially be fed into AI tools unchecked.

"It is crucial for employers to assume this is happening, regardless of any restrictions, and to assess the use of AI to ensure it complies with their security and governance standards," emphasizes Brooke Johnson, Chief Legal Counsel at Ivanti.

The survey also reveals a troubling paradox. While 52% of office workers believe that working more efficiently simply means doing more work, many prefer to keep their productivity gains to themselves. This mistrust is accompanied by an AI-fueled impostor syndrome, with 27% of users saying they don't want their abilities to be questioned.

This situation highlights a huge gap between management and employees. Although 44% of professionals surveyed say their company has invested in AI, they simultaneously complain about a lack of training and skills to use these technologies effectively. This disconnect betrays a poorly orchestrated technological transformation.

In the face of this silent revolution, Brooke Johnson advocates a proactive approach: "To mitigate these risks, organizations should implement clear policies and guidelines for the use of AI tools, along with regular training sessions to educate employees on the potential security and ethical implications."

This survey suggests that companies should completely rethink their integration of AI, rather than turning a blind eye to this legion of secret users. The stakes go beyond mere operational optimization: the most successful organizations will need to balance technological use with the enhancement of human potential.
By encouraging open dialogue, employers can foster transparency and collaboration, ensuring that the benefits of AI are harnessed safely and effectively. Ignoring this silent revolution runs the risk of deepening mutual distrust between management and employees, to everyone's detriment. – AFP Relaxnews

*This survey was conducted by Ivanti in February 2025 among more than 6,000 office workers and 1,200 IT and cybersecurity professionals.

Employees are using AI at work but hiding it from their bosses because they think it gives them a ‘secret advantage' over their peers

Yahoo

29-05-2025

  • Business
  • Yahoo

Employees are using AI at work but hiding it from their bosses because they think it gives them a ‘secret advantage' over their peers

Companies across the U.S. are struggling to figure out ways to help their employees supercharge their productivity using AI. But some employees who are already using the technology are trying to keep it hidden from their bosses.

Nearly one-third of workers keep their AI use a secret from their employer, according to new data from Ivanti, an IT software company. The biggest reason workers choose not to disclose using the tech tool is that they want a "secret advantage" over their peers (36%), according to the report. Employees also fear that revealing their reliance on this technology will lead to losing their job (30%). And there's also the fact that their workplaces do not have clear-cut policies on AI usage.

"Employees are using AI tools without their bosses' knowledge to boost productivity. It is crucial for employers to assume this is happening, regardless of any restrictions, and to assess the use of AI to ensure it complies with their security and governance standards," Brooke Johnson, chief legal counsel and senior vice president of security and human resources at Ivanti, writes in the report.

Instead of turning a blind eye to a legion of secret AI users, the report suggests that companies rethink the ways they integrate AI and automation into their workforce, emphasizing the need for a clear, comprehensive plan that explains how certain tools will support specific roles and objectives.

While 44% of professionals surveyed say their companies have invested in AI, they also report lacking the adequate skills and training to use the technology effectively. That could become an even more pressing issue in the future, considering the pitfalls that AI could present when it comes to things like cybersecurity, company contract violations, or IP, according to the report.

"To mitigate these risks, organizations should implement clear policies and guidelines for the use of AI tools, along with regular training sessions to educate employees on the potential security and ethical implications," writes Johnson. "By fostering an open dialogue, employers can encourage transparency and collaboration, ensuring that the benefits of AI are harnessed safely and effectively."
