
Future-Proofing Leaders & CX: Soft Skills In An AI World
The studies went on to say that organizations need not only to develop this capability but also to get several other things right if they are to harness the potential of generative AI. These include leadership alignment, enterprise strategy, data cleanliness and availability, a modern technological infrastructure, the right internal skills and capabilities, and the ability to manage large-scale change.
The Accenture report added that, in its view, talent development and new ways of working stood out as the imperative with the greatest potential to differentiate organizations. However, the report noted, it was also the least developed imperative in the organizations surveyed.
Disappointingly, the report stopped short of outlining the skills leaders and their teams will need to develop if they and their organizations are to thrive in this new AI-powered era.
Soft skills
However, a recent report by Skiilify, a research-based learning experience provider, sheds some light on what those new skills and capabilities might be.
Their study was designed to identify the soft skills that leaders need to develop in order to thrive in an industry that is constantly evolving, the value they place on these skills, and the gaps between the perceived value of these soft skills and their actual development.
Here are the main headlines of the study:
While the survey focused on capturing the perspectives of tech leaders and the challenges they face, when I discussed the results recently with Dr. Paula Caligiuri, Co-Founder of Skiilify and a D'Amore-McKim School of Business Distinguished Professor at Northeastern University, she argued that the findings about skills deficiencies translate directly to all leaders.
She also noted that two other things really stood out to her from the findings.
The first was that each of these competencies was considered extremely important for the future. Yet, given where leaders currently are, acquiring these new skills and competencies will require a 'big behavioural shift', Caligiuri suggests.
Secondly, Caligiuri highlighted that most respondents felt they had three to six years to develop these competencies. This partly aligns with Accenture's view that 'The rapid pace of technological change has reduced the half-life of skills to less than five years.'
However, Caligiuri disagrees and warns that 'super-employees', those with deep technical skills and knowledge, as well as a mature set of developed soft skills, are in high demand right now, and that demand is only going to grow.
As a result, Caligiuri suggests that leaders should start developing these skills now, as they take time to build. She also warns that the road ahead is likely to be 'tough' and that leaders will likely face 'some bumps and bruises along the way, but that leaders should stick with it', as these skills are likely to become increasingly important in the coming years.
This is sound advice.
But one of the most telling findings for me was the insight that leaders often lack the time to develop new skills.
This is a real challenge, not just for leaders but for their teams too.
They not only have to create the space and time for themselves to experiment, fail, and learn, but they must also create an environment and culture that allows their team members to do the same.
In a world where the pace of technological change appears to be constantly increasing, this, for some, will feel like an impossible task.
However, that is the challenge emerging from this research.
The truth is that if we want to achieve the better customer, employee, and business outcomes that we are all striving for, then leaders and their teams must carve out time and space to learn and try new things. This is essential if they are to give themselves a fighting chance of providing a superior experience to the customers they serve.
Related Articles


Forbes
Google Photos Introduces New AI Tools: Fun, Free, And Very Limited
Google is adding new generative AI tools to Google Photos, shifting the app away from its original purpose.

Google Photos could be at the start of a radical transformation. In a major update rolling out now, Google is introducing what could be the most significant Google Photos AI upgrade yet, allowing you to turn static images into animated video clips and stylized art with just a few taps. The tools are free and fun, but they are deliberately and severely limited, and in many ways that's a good thing.

Google's Remix feature turns still images into fun videos with AI.

The Big Update: Photo To Video — Fun But Deliberately Nerfed
As I previously reported, Google Photos is introducing a game-changing new feature that transforms still photos into short video clips with a single tap. It's a powerful but significantly cut-down version of the photo-to-video features already available to paying Google AI Pro and Ultra subscribers in Gemini. You can select any picture in your Google Photos library, choose between the vague 'Subtle movement' or slot-machine-like 'I'm feeling lucky' options, and wait about a minute for the video animation to generate.

Google's demos show once-static people celebrating by throwing handfuls of confetti into the air before it tumbles back down from above. Both demos were generated in 'I'm feeling lucky' mode. I presume additional video effects will be available at launch, with more added in the future. If you don't like the results, you can hit the Regenerate button to try again, but that's about it for user control. You can also tap thumbs-up or thumbs-down icons to send feedback to Google. It would be great to see a few more preset options beyond just subtle movements or a random effect.
Even adding just a few more emotions would make these clips useful as fun reactions in messaging apps, in place of emojis or pre-made GIFs.

The focus here is clearly on fun rather than unbridled creativity. Where Gemini utilizes Google's powerful Veo 3 video AI model to create animations of anything you want, Google Photos employs the older Veo 2 model, offering very little user control over what happens in the animation beyond repeatedly hitting the 'Regenerate' button. Furthermore, Veo 2 cannot generate audio, one of the standout features of Veo 3.

Remix Your Photos — Too Little, Too Late?
First discovered in May of this year, the new 'Remix' feature allows you to select a photo and transform it into a range of artistic styles, including cartoons, pencil sketches, and paintings.

Google Photos' Remix feature lets you transform photos into a range of artistic styles.

As with the Photo to Video feature above, you can hit Regenerate to retry any pictures you don't like and tap one of the thumb icons to provide feedback. Remix is clearly aimed at having fun and sharing moments in new ways, and there's nothing wrong with that. The results are Google's answer to the viral 'Ghiblified' images and action-figure pictures you've probably seen taking over social media. However, unlike powerful tools such as ChatGPT or Midjourney, where you can simply type in any style imaginable, Remix forces you to pick from a small menu of pre-selected styles. This approach helps keep generated output safe for consumption, but it also prevents any real creativity. Google will need to update the library of styles frequently or the novelty will wear off quickly.

A New Direction For Google Photos — The Create Tab
To make Google Photos' new generative tools easier to find, Google is introducing a new 'Create' tab, accessible via an icon at the bottom of the app on both Android and iOS.
Here, you'll find all of Google Photos' creative tools gathered in one place, effectively separating the newer creative side of Google Photos from its original library functions.

Google Photos introduces a new 'Create' tab to house all of its new generative AI tools.

This marks the beginning of a significant shift in purpose for Google Photos: as Google notes, it's now 'more than an archive, it's a canvas.' Personally, that's not what I want from Google Photos; I use it as a place to store and revisit memories rather than as a tool to create new content. The app's existing animated slideshows and collages use AI to enhance memories, but these new tools alter them into something entirely new, creating video clips of events that never really happened.

Google Photos Now Creates, But Is It Safe?
Google appears to be exercising considerable caution with these new features, not least by severely limiting the scope of what can be created with them. The company acknowledges that the results may be 'inaccurate or unexpected' and displays a warning before use, along with a link to its GenAI prohibited use policy. Furthermore, all images and videos generated by Google Photos using AI contain invisible SynthID watermarks that reveal their synthetic origins.

The Big Issue: US-Only Rollout Alienates Global Users
Photo to Video and Remix are now rolling out on Android and iOS, but are currently only available in the US. The Create tab will follow in August, again only in the US. This will disappoint international users, who may have to wait a considerable time to access the new features. Remember, Google Photos users outside the US are still waiting for access to the AI-powered 'Ask Photos' feature nine months after launch.
Google Photos has a massive worldwide user base, with billions of photos and videos uploaded each week, and Google risks frustrating a colossal number of customers if non-US users remain excluded from its best features.


USA Today
MCP Connects, SDP Delivers: The Missing Half of AI Memory is Here
Prescott, Arizona / Syndication Cloud / July 22, 2025 / David Bynon

Key Takeaways
- Model Context Protocol (MCP) creates AI connections to external tools but doesn't define structured memory content
- Semantic Digest Protocol (SDP) provides trust-scored, fragment-level memory objects for reliable AI operations
- Multi-agent systems typically fail due to missing shared, verifiable context rather than communication issues
- MCP and SDP together form a complete memory architecture that stops hallucinations and contextual drift
- MedicareWire will implement SDP in 2025 as the first major deployment of AI-readable, trust-verified memory in a regulated domain

AI's Memory Crisis: Why Today's Systems Can't Remember What Matters
Today's AI systems face a critical problem: they process vast amounts of information but struggle with reliable memory. This isn't merely a technical issue; it's what causes hallucinations, inconsistency, and unreliability in advanced AI deployments.

The problem becomes obvious in multi-agent systems. When specialized AI agents work together, they don't typically fail from poor communication. They fail because they lack shared, scoped, and verifiable context. Without a standardized memory architecture, agents lose alignment, reference inconsistent information, and produce unreliable results.

David Bynon, founder of MedicareWire, identified this issue early on. In regulated areas like Medicare, incorrect information can seriously impact consumers making healthcare decisions. The solution needs two protocols working together to create a complete memory system for AI.

The first protocol, Model Context Protocol (MCP), addresses the connection problem. But it's just half of what's needed for truly reliable AI memory.
Understanding Model Context Protocol (MCP)
IBM recently recognized the Model Context Protocol (MCP) as core infrastructure for AI systems, describing it as 'USB-C for AI': a universal connector standard allowing AI models to connect with external tools, data sources, and memory systems. This recognition confirmed what many AI engineers already understood: standardized connections between AI models and external resources are what allow reliable systems to be built at scale.

IBM's Recognition: The 'USB-C for AI' Breakthrough
The USB-C comparison makes sense. Before USB standardization, connecting devices to computers required numerous proprietary ports and cables. Before MCP, every AI tool integration needed custom code, fragile connections, and ongoing maintenance. IBM's official support of MCP acknowledged that AI's future requires standardized interfaces. Just as USB-C connects any compatible device to any compatible port, MCP creates a standard protocol for AI systems to interact with external tools and data sources.

What MCP Solves: The Transport Problem
MCP handles the transport problem in AI systems. It standardizes how an AI agent:
- Negotiates with external systems about needed information
- Creates secure, reliable connections to tools and data sources
- Exchanges information in predictable, consistent formats
- Maintains state across interactions with various resources

This standardization allows developers to build tools once for use with any MCP-compliant AI system. Custom integrations for each new model or tool become unnecessary; there's just consistent connectivity across platforms.

The Critical Gap: Missing Content Definition
Despite its value, MCP has a major limitation: it defines how AI systems connect, but not what the content should look like. This resembles standardizing a USB port without defining the data format flowing through it. That leaves a significant gap in AI memory architecture.
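To make the transport idea concrete: MCP messages are JSON-RPC 2.0 objects, and tools are invoked via a 'tools/call' request. The sketch below builds such a request in Python; the tool name and arguments are hypothetical examples, not anything from the article.

```python
import json

def make_mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP-style 'tools/call' request.

    MCP messages are JSON-RPC 2.0 objects; the 'tools/call' method
    with a 'name'/'arguments' params shape follows the MCP spec.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool name and arguments, for illustration only:
request = make_mcp_tool_call(1, "lookup_plan_details",
                             {"county": "Los Angeles", "year": 2025})
print(request)
```

Note that nothing in this envelope says what the returned content should look like or how trustworthy it is; that is exactly the gap the article says SDP fills.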
While MCP handles connections, it doesn't address:
- How to structure memory for machine understanding
- How to encode and verify trust and provenance
- How to scope and contextualize content
- How information fragments should relate to each other

This explains why AI systems with excellent tool integration still struggle with reliable memory: they have connections but lack the content structure needed for trustworthy recall.

Semantic Digest Protocol: The Memory Layer MCP Needs
This is where the Semantic Digest Protocol (SDP) fits. It is built to work with MCP while solving what MCP leaves unaddressed: defining what memory should actually look like.

Trust-Scored, Fragment-Level Memory Architecture
SDP organizes memory at the fragment level instead of treating entire documents as single information units. Each fragment (a fact, definition, statistic, or constraint) exists as an independent memory object with its own metadata. These memory objects contain:
- The actual information content
- A trust score based on source credibility
- Complete provenance data showing the information's origin
- Scope parameters showing where and when the information applies
- Contextual relationships to other memory fragments

This detailed approach fixes a basic problem: AI systems must know not just what a fact is, but how much to trust it, where it came from, when it applies, and how it connects to other information. Extending the 'USB-C for AI' analogy, SDP is a universal USB-C thumb drive for the Model Context Protocol.
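The article describes these fragment fields conceptually but does not publish SDP's actual schema, so the following is only a minimal sketch of what a trust-scored, fragment-level memory object might look like; every field name here is an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryFragment:
    """A fragment-level memory object in the spirit of SDP.

    Illustrative only: the article describes these concepts
    but does not publish SDP's actual schema.
    """
    content: str                  # the fact, definition, or statistic itself
    trust_score: float            # source credibility, e.g. 0.0 to 1.0
    provenance: str               # where the information came from
    scope: dict = field(default_factory=dict)    # where/when it applies
    related: list = field(default_factory=list)  # links to other fragments

# Example fragment mirroring the article's Medicare MOOP scenario:
moop_fragment = MemoryFragment(
    content="Maximum Out-of-Pocket limit: $4,200",
    trust_score=0.98,
    provenance="CMS data",
    scope={"county": "Los Angeles", "year": 2025},
)
```

The key design point is that trust, provenance, and scope travel with each fragment rather than being implied by the document it came from.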
It provides data, across multiple surfaces, in a format MCP recognizes and understands.

Machine-Ingestible Templates in Multiple Formats
SDP creates a complete trust payload system with templates in multiple formats:
- JSON-LD for structured data interchange
- TTL (Turtle) for RDF graph representations
- Markdown for lightweight documentation
- HTML templates for web publication

Invented by David Bynon as a solution for MedicareWire, this format flexibility lets SDP work immediately with existing systems while adding the necessary trust layer. For regulated sectors like healthcare, where MedicareWire operates, this trust layer changes AI interactions from educated guesses into verified responses.

The Complete AI Memory Loop: MCP + SDP in Action
When MCP and SDP work together, they form a complete memory architecture for AI systems. Here's the workflow:

From User Query to Trust-Verified Response
The process starts with a user query, for example: 'What's the Maximum Out-of-Pocket limit for this Medicare Advantage plan in Los Angeles?'

The AI model uses MCP to negotiate context with external resources. It identifies what specific plan information it needs and establishes connections to retrieve that data. The external resource sends back an SDP-formatted response with the requested information. This includes the MOOP value, geographic scope (Los Angeles County), temporal validity (2025), and provenance (directly from CMS data), all with appropriate trust scores.

With trust-verified information, the model answers accurately: 'The 2025 Maximum Out-of-Pocket limit for this plan in Los Angeles County is $4,200, according to CMS data.' No hallucination. No vague references. No outdated information. Just verified, scoped, trust-scored memory delivered through standardized connections.

Eliminating Hallucinations Through Verified Memory
This method addresses one of the root causes of hallucinations in AI systems.
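The query-to-answer loop described above can be sketched end to end. Everything in this snippet (the in-memory store, the trust threshold, the field names) is an assumption made for illustration, not SDP's actual implementation; the point is that the system answers only from verified fragments and refuses otherwise.

```python
# Illustrative trust-gated retrieval loop. The store, threshold, and
# field names are assumptions for this sketch, not SDP's actual API.
MEMORY_STORE = {
    ("moop", "Los Angeles", 2025): {
        "value": "$4,200",
        "trust_score": 0.98,
        "provenance": "CMS data",
    },
}

def answer_moop(county: str, year: int, min_trust: float = 0.9) -> str:
    fragment = MEMORY_STORE.get(("moop", county, year))
    if fragment is None or fragment["trust_score"] < min_trust:
        # Refuse rather than guess: the anti-hallucination step.
        return "I don't have verified data for that plan."
    return (f"The {year} Maximum Out-of-Pocket limit in {county} County "
            f"is {fragment['value']}, according to {fragment['provenance']}.")

print(answer_moop("Los Angeles", 2025))
print(answer_moop("San Diego", 2025))   # no fragment -> refusal, not a guess

# Updating the external memory layer is a data write, not a retraining run:
MEMORY_STORE[("moop", "Los Angeles", 2025)]["value"] = "$4,500"
print(answer_moop("Los Angeles", 2025))  # reflects the update immediately
```

The final lines illustrate the article's point that when information changes, only the external memory layer is updated; the model itself is untouched.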
Rather than relying on statistical patterns from training, the AI retrieves specific, verified information with full context about its reliability and applicability. When information changes, there's no need to retrain the model. The external memory layer updates, and the AI immediately accesses the new information, complete with trust scoring and provenance tracking.

Real-World Implementation: MedicareWire 2025
This isn't theoretical: SDP launches on MedicareWire in August 2025, marking the first major implementation of AI-readable, trust-scored memory in a regulated domain.

1. First Large-Scale Deployment in a Regulated Domain
The healthcare industry, especially Medicare, offers an ideal testing ground for trust-verified AI memory. Incorrect information has serious consequences, regulations are complex, and consumers need reliable guidance through a confusing system.

MedicareWire's implementation will give AI systems unprecedented accuracy when accessing Medicare plan information. Instead of using potentially outdated training data, AI systems can query MedicareWire's SDP-enabled content for current, verified information about Medicare plans, benefits, and regulations.

2. Solving Healthcare's Critical Information Accuracy Problem
Consumers using AI assistants to explore Medicare options will get consistent, accurate information regardless of which system they use. The SDP implementation ensures any AI agent can retrieve precise details about:
- Plan coverage specifications
- Geographic availability
- Cost structures and limitations
- Enrollment periods and deadlines
- Regulatory requirements and exceptions

All come with proper attribution, scope, and trust scoring.

3. Creating the Foundation for Multi-Agent Trust Infrastructure
Beyond the immediate benefits for Medicare consumers, this implementation creates a blueprint for trust infrastructure in other regulated fields.
Multi-agent systems will have shared, verifiable context, eliminating the drift and hallucination problems that affect complex AI deployments. The combination of MCP's standardized connections and SDP's trust-verified memory builds the foundation for reliable AI systems that can safely operate in highly regulated environments.

From Connection to Memory: The Future of Reliable AI Is Here
David Bynon, founder of Trust Publishing and architect of SDP, states: 'We didn't just create a format. We created the trust language AI systems can finally understand — and remember.'

As AI shapes important decisions in healthcare, finance, legal, and other critical fields, reliable, verifiable memory becomes essential. The MCP+SDP combination shifts AI from probabilistic guessing to trust-verified information retrieval, defining the next generation of AI applications.

SDP will be available as an open protocol for non-directory systems, supporting broad adoption and continued development across the AI ecosystem. As the first major implementation, MedicareWire's deployment marks the beginning of a new phase in trustworthy artificial intelligence. MedicareWire is leading the development of trustworthy AI memory systems that help consumers access accurate healthcare information when they need it most.

David Bynon
101 W Goodwin St # 2487
Prescott, Arizona 86303
United States
Yahoo
International Business Machines Corporation (IBM): Don't Abandon The Stock, Warns Jim Cramer
We recently published . International Business Machines Corporation (NYSE:IBM) is one of the stocks Jim Cramer recently discussed, and one of his favorite technology stocks. Throughout this year, the CNBC TV host has expressed optimism about the firm's CEO and its consistency in winning contracts for its enterprise computing business.

International Business Machines Corporation (NYSE:IBM)'s shares fell by 7.6% after its latest earnings report, in which software revenue of $7.39 billion missed analyst estimates of $7.43 billion. Cramer discussed the earnings report:

'Most of the news is good this morning, IBM. I still think not as bad, uh, Chipotle we have to talk about.'

Previously, he discussed potential future International Business Machines Corporation (NYSE:IBM) share price movement:

'Oh, I like IBM very much. I mentioned Ben Wright earlier. I think that Ben, he's really turned me on to this stock. We did a very positive piece about it. I think it goes, I'm going to say not much higher but creeping higher over time, and that's actually a great place to be. So I like IBM.'

While we acknowledge the potential of IBM as an investment, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns with limited downside risk. If you are looking for an extremely cheap AI stock that is also a major beneficiary of Trump tariffs and onshoring, see our free report on the .

READ NEXT: 30 Stocks That Should Double in 3 Years and 11 Hidden AI Stocks to Buy Right Now.

Disclosure: None. This article was originally published at Insider Monkey.