The AI-Readiness Crisis: Why Businesses Can't Wait for Universities

Forbes | 17-06-2025
Jyoti Shah is a Director of Applications Development, a GenAI tech leader, mentor, innovation advocate, and Women In Tech advisor at ADP.
More than half of recent college graduates don't know if their colleges have prepared them for "the use of generative AI," according to a Cengage Group survey cited by Higher Ed Dive, while about 66% of employers feel that potential job candidates should have "foundational knowledge" of generative AI tools.
This disconnect is causing problems for the hiring process—especially, in my experience, for recent computer science grads.
In my role, I've spoken with dozens of bright developers who have never used GitHub Copilot, have no idea how machine-learning pipelines or observability frameworks work and don't understand how SHAP or LIME explain model predictions. Let me be clear: This is a reflection of how quickly industry changes and how slowly academia adjusts, not of these developers' abilities.
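For readers who haven't met those explanation tools, here is a minimal sketch of the kind of exercise many new hires have never seen: computing SHAP values for a trained model. It assumes the shap and scikit-learn packages; the dataset and model are illustrative only.

```python
# A minimal sketch: explain a tree-based classifier with SHAP values.
# Assumes the shap and scikit-learn packages; dataset is illustrative.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes per-feature contributions to each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Rank features by mean absolute contribution across the sample.
importance = abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```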
Having managed international AI engineering teams, I've seen how these gaps manifest locally as longer onboarding times, a lack of trust in AI tools and lost chances to streamline development cycles. To solve this challenge, companies can't wait for universities to catch up to AI. They need to find ways to train engineers to adapt to this ever-changing technology.
According to a 2024 survey from Kyndryl, 71% of business leaders feel their workforce is not yet ready to leverage AI, with many citing the lack of "skilled talent needed to manage AI" as a major reason. Companies without an AI-savvy workforce are compelled to postpone deployments, contract out critical functions or deal with poor product quality.
The good news is that, according to McKinsey, nearly half of employees want more formal AI training, but other research shows that only 31% of employers are providing AI training. When AI talent can't be hired fast enough, it must be developed from within.
Big Tech is already working on this, with Microsoft, Google, IBM, Intel, SAP and Cisco collectively planning to train over 100 million workers.
I've seen the success of these types of programs at my own company, where we set up an internal AI bootcamp, started hands-on labs that were directly related to real-world projects and matched junior engineers with mentors who had experience with AI.
To promote practical upskilling, we also host project-focused webinars, arrange hackathons with an AI focus and assign structured learning paths on sites like Udemy to support ongoing improvement.
Based on these experiences, here are five ways to bridge the AI skills gap at your organization:
1. Launch an internal AI learning program. Instead of using pre-made tutorials, create learning tracks centered on actual issues that your engineers encounter, such as using AI for CI/CD optimization, auto-generating test cases (see the first sketch after this list) or enhancing search relevance with natural language processing.
2. Make AI a core part of DevOps. AI is not an "optional add-on." Tools like Amazon CodeWhisperer and GitHub Copilot are quickly taking over as the standard. Integrate them into documentation procedures, deployment flows and code reviews.
3. Promote peer mentorship. While formal training has its place, one-on-one, contextual mentoring frequently works better. Establish "AI champion" positions and facilitate team members' real-time shadowing and learning.
4. Measure AI tool adoption. Keep track of how often engineers use AI tools for backlog grooming, testing, debugging and code commits (see the second sketch after this list). Organize frequent hackathons or internal demonstrations centered on AI-enhanced engineering.
5. Partner with academic institutions. Talk to the faculty at the schools you hire a lot from. Provide real-world problem statements, fund student projects with an AI theme or collaborate on developing modular course materials. It helps your brand and the talent pipeline.
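As an illustration of point one, here is a minimal sketch of a hands-on lab: asking a chat model to draft pytest cases for a real project function. It assumes the openai Python package and an API key in the environment; the model name, prompt and slugify function are illustrative, not prescriptions.

```python
# A minimal sketch: auto-generate unit tests for an existing function
# with an LLM. Assumes the openai package and OPENAI_API_KEY is set;
# the model name and the slugify example are illustrative.
import inspect
from openai import OpenAI

def slugify(title: str) -> str:
    """The real project function we want tests for."""
    return "-".join(title.lower().split())

client = OpenAI()
source = inspect.getsource(slugify)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # substitute whatever chat model you use
    messages=[
        {"role": "system", "content": "You write concise pytest test functions."},
        {"role": "user", "content": f"Write pytest tests covering edge cases for:\n{source}"},
    ],
)
print(response.choices[0].message.content)  # always review before committing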
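And for point four, one lightweight way to measure adoption is to count commits that teams explicitly tag as AI-assisted. The "AI-assisted:" commit-trailer convention below is hypothetical; substitute whatever marker your teams actually use.

```python
# A minimal sketch: share of recent commits carrying a hypothetical
# "AI-assisted:" trailer, counted via git log.
import subprocess

def ai_assisted_share(repo: str, since: str = "30 days ago") -> float:
    def count(extra_args):
        out = subprocess.run(
            ["git", "-C", repo, "log", f"--since={since}", "--oneline", *extra_args],
            capture_output=True, text=True, check=True,
        ).stdout
        return len(out.splitlines())

    total = count([])
    assisted = count(["--grep=AI-assisted:"])  # matches the trailer in messages
    return assisted / total if total else 0.0

print(f"AI-assisted commits in the last 30 days: {ai_assisted_share('.'):.0%}")
```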
There is no longer any room for speculation regarding the move toward AI-native development. It has already arrived. In addition to writing code, developers are now expected to work with machines to direct and verify the output of AI. Businesses that don't facilitate this change will experience increased turnover, higher training expenses and decreased developer productivity.
On the other hand, companies will gain a compounding advantage if they make AI fluency a strategic capability for all engineers, not just data scientists. They will attract top talent who wish to build for the future rather than the past, ship more quickly and adapt better.
Don't wait for higher education to fill the AI gap. Begin within your organization. Invest in mentorship, align tooling with learning and cultivate an internal culture of AI fluency. In the future of software engineering, the ability to code with AI will matter more than simply knowing how to code.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.

Related Articles

Google Empowers Indian Developers to Lead the Global AI Wave
Entrepreneur | 27 minutes ago

Google has announced a suite of new artificial intelligence initiatives tailored to empower Indian developers and startups. The announcements reflect Google's deep commitment to fostering innovation in India and accelerating the country's global leadership in AI development. Central to the update was the introduction of Google's latest AI advancements for India, including localised deployment of its high-performance Gemini 2.5 Flash model, a set of new agentic AI tools in Firebase Studio, and partnerships aimed at nurturing local AI talent and solutions. The efforts are part of Google's broader mission to support India's aspirations of becoming a global AI powerhouse.

Dr Manish Gupta, Senior Director for India and APAC at Google DeepMind, emphasised the critical role of Indian developers. "Indian developers are literally writing the next chapter of India's success story, using AI capabilities to build real-world applications that are reaching millions of businesses and people across India and the world," said Dr Gupta. "We remain steadfast in bringing them our industry-leading, cutting-edge capabilities to accelerate their journeys, and India's leadership in a global AI-led future."

The company also shared that, based on third-party evaluations, the Android and Google Play ecosystem generated an estimated INR 4 lakh crore in revenue for app publishers and the wider economy in India during 2024. This ecosystem supported the creation of around 35 lakh jobs through direct, indirect and spillover effects.

In her remarks during a keynote conversation with Accel's Subrata Mitra, Preeti Lobana, Country Manager at Google India, highlighted the increasing momentum of India's digital innovation. "There's a buzz about the 'India Opportunity' driven by an ambitious national vision," she said. "India's developers are shaping how the world will use AI, and we're proud to stand with them."

Among the key developments announced was the localisation of Gemini 2.5 Flash for Indian developers, ensuring improved speed and stability in sectors requiring low-latency, high-performance AI, particularly healthcare, finance and public services.

Google's collaboration with three India AI Mission-backed startups, Sarvam, Soket AI and Gnani, is furthering the development of India's Make-in-India AI models using its Gemma family of open models. Sarvam's recent release, Sarvam-Translate, a model built on Gemma for long-form text translation, was highlighted as a successful outcome of this collaboration. Additionally, Google is working with BharatGen at IIT Bombay to create indigenous speech recognition and text-to-speech tools in Indic languages, with the aim of enhancing accessibility and representation for India's diverse linguistic communities.

Google also introduced new AI-powered features in Google Maps, including enhanced data on over 250 million places and India-specific pricing for the Maps Places UI Kit. These improvements are aimed at supporting developers working in India's expanding mobile commerce space, making it easier to integrate location-based features into their services.

To further assist developers, the company announced new tools and capabilities in Firebase Studio, its cloud-based AI development workspace. Features such as optimized templates, collaborative workspaces and backend integration are designed to help developers quickly build and launch full-stack AI applications at no initial cost.

Recognising the growing potential of India's gaming sector, Google launched the 'Google Play x Unity Game Developer Training' program. Developed in collaboration with Unity and the Game Developer Association of India, the initiative offers 500 Indian developers access to over 30 hours of specialised online training. It is currently being rolled out in partnership with the governments of Tamil Nadu and Andhra Pradesh, with plans for further expansion. Google is also hosting the Gen AI Exchange Hackathon, encouraging developers to translate their AI skills into practical innovations across industries.

The day also included a showcase by eight Indian startups: Sarvam, CoRover, InVideo, Glance, Dashverse, ToonSutra, Entri and Nykaa, demonstrating impactful real-world applications built with Google's AI tools. The announcements underline Google's intent to strengthen India's position in the global AI landscape while empowering the local developer ecosystem with advanced tools and meaningful support.

Users can be 'rude' to AI services to be more efficient and sustainable

Forbes | 29 minutes ago

Do you speak AI? It's not a question that we're used to yet, but it might be soon.

At its lower level, artificial intelligence obviously has a language in terms of the coding syntax, structure and software methodology used by the developers and data scientists who build it. It also has a language in terms of its data model, its employment of large and small language models and the data fabric that it operates in. But AI also has a human language. Users who have experimented with ChatGPT, Anthropic's Claude, Google Gemini, Microsoft Copilot, DeepSeek, Perplexity, Meta AI through WhatsApp, or one of the enterprise platform AI services such as Amazon CodeWhisperer will know that there's a right way and a wrong way to ask for automation intelligence. Being quite specific about your requests and structuring the language in a prompt with more precise descriptive terms to direct an AI service and narrow its options is generally a way of getting a more accurate result. Then there's the politeness factor.

The Politics Of Politeness

Although some analysis of this space and a degree of research suggest a polite approach is best when interacting with an AI service (it might help us be better humans, after all), there is a wider argument that says politeness isn't actually required, as it takes up extra 'token' space, and that's not computationally efficient or good for the planet's datacenter carbon footprint. A token is a core unit of natural language text or some component of an image, audio clip or video, depending on the 'modality' of the AI processing happening; while 'sullen' is one token, 'sullenness' would more likely be two tokens: 'sullen' and 'ness'. All those pleases, thank-yous and 'you're just awesome' interactions a user has with AI are not necessarily a good idea. So let's ask ChatGPT what to do…

Inference Complexity Scales With Length

Keen to voice an opinion on this subject is Aleš Wilk, cloud software and SEO specialist at Apify, a company known for its platform that allows developers to build, deploy and publish web scrapers, AI agents and automation tools. 'To understand this rising topic of conversation further, we need to start by realising that every token a user submits to an AI language model represents a unit that is measurable in computational cost,' said Wilk. 'These models work and rely on transformer architectures, where inference complexity scales with sequence length, particularly due to the quadratic nature of self-attention mechanisms. Using non-functional language like 'please' or 'thank you' feels like a natural level of conversational dialogue, but it can inflate prompt length by 15-40% without contributing to semantic precision or task relevance.'

Looking at this from a technical and efficiency point of view, this is a hidden cost. Wilk explains that on platforms such as GPT-4-turbo, where pricing and compute are token-based, verbosity in prompt design directly increases inference time, energy consumption and operational expenditure. He also notes that empirical analyses suggest 1,000 tokens on a state-of-the-art LLM can emit 0.5 to 4 grams of CO₂, depending on model size, optimization and deployment infrastructure.
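To see the overhead Wilk describes for yourself, here is a minimal sketch that counts tokens in a polite prompt versus a direct one. It assumes OpenAI's tiktoken package and the cl100k_base encoding used by GPT-4-class models; the prompts are illustrative.

```python
# A minimal sketch: compare token counts of polite vs. direct prompts.
# Assumes the tiktoken package; cl100k_base matches GPT-4-class models.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
polite = "Hello! Could you please kindly summarize this report for me? Thank you so much!"
direct = "Summarize this report."

for label, prompt in [("polite", polite), ("direct", direct)]:
    print(f"{label}: {len(enc.encode(prompt))} tokens")
# The courtesy wrapper multiplies token count without adding task information.
```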
On a larger scale and across billions of daily prompts, unnecessary tokens can contribute to thousands of metric tons of additional emissions annually. To put rough numbers on it: at even 1 gram of CO₂ per 1,000 tokens, 100 wasted tokens on each of a billion daily prompts works out to about 100 metric tons of CO₂ per day, well over 30,000 metric tons a year.

'This topic has become widely discussed, as it not only concerns cost, but also sustainability. Looking at GPU-intensive inference environments, longer prompts can drive up power draw, increase cooling requirements and reduce throughput efficiency. Why? Because as AI moves into continuous pipelines, agent frameworks, RAG systems and embedded business operations, for example, the marginal ineffectiveness of prompt padding can aggregate into a big environmental impact,' underlined Wilk.

Streamlining User Inputs

An optimization specialist himself, Wilk offers a potential solution: developers and data scientists could approach prompt design the way they write performance code, removing redundancy, maximizing functional utility and streamlining user inputs. In the same way that we use linters and profilers (code-improvement tools) for software, we need tools to clean and token-optimize prompts automatically. For now, Wilk says he would encourage users to be precise and minimal with their prompts. 'Saying 'please' and 'thank you' to AI might feel polite, but it's polite pollution in computational terms,' he stated.

Greg Osuri, founder of Akash, a company known for its decentralized compute marketplace, agrees that the environmental impact of AI is no longer just a peripheral concern; it is a central design challenge. He points to reports suggesting AI inference costs contribute to more than 80% of total AI energy consumption. The industry has spent the last couple of years pushing for bigger models, better performance and faster deployment, but AI inference, the process by which a trained LLM draws conclusions from brand-new data, might be doing most of the damage right now.

Language Models vs Google Search

'Each user query on LLM models consumes approximately 10 to 15 times more energy than a standard Google search. Behind every response lies an extremely energy-intensive infrastructure. This challenge isn't just about energy usage in abstract terms; we're talking about a whole supply chain of emissions that begins with a casual prompt and ends in megawatts of infrastructure demand and millions of gallons of water being consumed,' detailed Osuri, speaking to a closed press gathering this month.

He agrees that a lot is being said about polite prompts and whether it is more energy-efficient to be rude (or at least direct and to the point) to AI; however, he says these conversations miss the broader point. 'Most of the AI architecture today is inefficient by design. As someone who has spent years developing software and supporting infrastructure, it's surprising how little scrutiny we apply to prompt efficiency. In traditional engineering, we optimize everything: strip any redundancies, track performance and reduce waste wherever possible. The real question is whether the current centralized architecture is fit for scale in a world that is increasingly carbon-constrained. Unless we start designing for energy as a critical constraint, we will continue training models and further accelerating our own limitations,' he concluded.
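A toy version of the prompt "linter" Wilk calls for above could look like the sketch below: strip courtesy filler before a prompt is sent. The phrase list and regexes are illustrative, not exhaustive.

```python
# A toy prompt linter: remove courtesy filler before sending a prompt.
# The filler-phrase list is illustrative only.
import re

FILLER = [
    r"\bplease\b", r"\bkindly\b", r"\bthank you( so much)?\b",
    r"\bif you don'?t mind\b", r"\byou'?re (just )?awesome\b",
]

def lint_prompt(prompt: str) -> str:
    for pattern in FILLER:
        prompt = re.sub(pattern, "", prompt, flags=re.IGNORECASE)
    # Collapse leftover whitespace and trailing punctuation debris.
    return re.sub(r"\s{2,}", " ", prompt).strip(" ,.!")

print(lint_prompt("Could you please kindly summarize this report? Thank you!"))
# -> "Could you summarize this report?"
```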
This discussion will inevitably come around to whether AI itself has managed to become sentient. When that happens, AI will have enough self-awareness and consciousness to have conscious subjective feelings, and so be able to make an executive decision on how to manage the politeness-versus-processing-power balance. Until then, we need to remember that we are basically just using language models to generate content, be it code, words or images.

If I Had An AI Hammer

'Being polite or rude is a waste of precious context space. What users are trying to accomplish is to get the AI to generate the content they want. The more concise and direct we are with our prompts, the better the output will be,' explained Brett Smith, distinguished software engineer and platform architect at SAS. 'We don't use formalities when we write code, so why should we use formalities when we write prompts for AI? If we look at LLMs as a tool like a hammer, we don't say 'please' when we hit a nail with a hammer. We just do it. The same goes for AI prompts. You are wasting precious context space and getting no benefit from being polite or rude.'

The problem is, humans like empathy. When an AI service answers in a chatty, familiar manner that is purpose-built to imitate human conversation, humans are more likely to want to be friendly in response. The general rule stands: the more concise and direct users are with their prompts, the better the output will be.

'The AI is not sentient and it does not need to be treated as such,' asserted Smith. 'Stop burning up compute cycles, wasting datacenter electricity and heating up the planet with your polite prompts. I am not saying we zero-shot every prompt [a term for asking an LLM a question or giving it a task without providing any context or examples], but users can be concise, direct and maybe consider reading some prompt engineering guides. Use the context space for what it is meant for: generating content. From a software engineering perspective, being polite is a waste of resources. Eventually, you run out of context and the model will forget you ever told it 'please' and 'thank you' anyway. However, you may benefit as a person in the long term from being more polite when you talk to your LLM, as it may lead to you being nicer in personal interactions with humans.'

SAS's Smith reminds us that AI tokens are not free. He also envisages what he calls a 'hilarious hypothetical circumstance' in which our please-and-thank-you prompts get adopted by the software itself, and agents end up adding in niceties when talking agent-to-agent. The whole thing spins out of control, increasing the rate at which the system wastes tokens, context space and compute power as agent-to-agent communication grows. Thankfully, we can program against that reality, mostly.

War On Waste

Mustafa Kabul says that when it comes to managing enterprise supply chains at the wider business level (not just in terms of software and data), prudent businesses have spent decades eliminating waste from every process: excess inventory, redundant touchpoints, unnecessary steps.

'The same operational discipline must apply to our AI interactions,' said Kabul, in his capacity as SVP of data science, machine learning and AI at decision intelligence company Aera Technology. 'When you're orchestrating agent teams across demand planning, procurement and logistics decisions at enterprise scale, every inefficient prompt multiplies exponentially.
Inside operations we've managed, we have seen how agent teams coordinate complex multi-step workflows: one agent monitoring inventory levels, another forecasting demand, a third generating replenishment recommendations. In these orchestrated operations, a single 'please' in a prompt template used across thousands of daily decisions doesn't just waste computational resources, it introduces latency that can cascade through the entire decision chain,' clarified Kabul.

He says that just as we, as a collective business-technology community, have learned that lean operations require precision, not politeness, effective AI agent coordination demands the same 'ruthless efficiency' today. Kabul insists that the companies that treat AI interactions with the same operational rigor they apply to their manufacturing processes will have a 'decisive advantage' in both speed and sustainability.

Would You Mind, Awfully?

Although the UK may be known for its unerring politeness, even the British will perhaps need to learn to drop the airs and graces normally considered a requisite part of civility and social intercourse. The chatbot doesn't mind if you don't say please… and, if your first AI response isn't what you wanted, don't be ever so English and think you need to say sorry either.

MongoDB Intensifies Focus on Subscriptions: What's the Path Ahead?
Yahoo | an hour ago

MongoDB MDB is laying the groundwork for sustained subscription growth through a focused, long-term approach. The company is doubling down on the enterprise segment, where it sees the biggest potential, while also scaling its self-serve channel, which it calls a powerful engine for future growth. Many of the new users signing up for Atlas are first-time MongoDB customers, and the company is investing in onboarding and developer education to ensure they get the most out of the platform.

A major part of MongoDB's strategy involves AI. The company is combining real-time data, search and retrieval into a single platform to make it easier for developers to build intelligent applications. With its Voyage AI acquisition, MongoDB has added advanced embedding and reranking models. The latest version, Voyage 3.5, improves accuracy while significantly lowering storage costs. MongoDB also plans to roll out a new feature that allows developers to generate embeddings directly from data within its platform.

To expand global adoption, MongoDB is reaching out to developers in multiple languages and ecosystems. It has added documentation in Mandarin, Portuguese, Korean and Japanese. At the same time, it is targeting relational developers with certifications, training and self-serve courses to help them transition into modern app development.

In the first quarter of fiscal 2026, MongoDB's Subscription revenues were $531.5 million (96.8% of total revenues), up 21.6% year over year. Total customer count increased 16.05% year over year to 57,100. MDB is seeing solid growth in its subscriptions, and the ongoing as well as planned initiatives should drive further growth in the coming quarters. The Zacks Consensus Estimate for second-quarter 2025 Subscription revenues is pegged at $537.49 million.

MDB Competes for Subscriptions in the Database Market

MongoDB competes with tech behemoths such as Amazon AMZN and Microsoft MSFT, which provide subscription-based database services in the cloud. Amazon is enhancing its AWS database services by adding serverless options that automatically scale based on demand, removing the need for manual capacity management and making things easier for customers. Amazon is also integrating AI capabilities into its database offerings to support more advanced and dynamic applications.

Microsoft is doing something similar by enhancing Azure's database services, rolling out serverless capabilities like autoscaling and per-second billing in Azure Cosmos DB and Azure SQL to make usage more flexible and cost-effective. Microsoft is also embedding AI directly into its databases, with features like vector search, semantic queries in SQL Server 2025 Preview, and deep integrations between Cosmos DB and the Azure AI ecosystem.

MDB's Share Price Performance, Valuation and Estimates

MDB shares have lost 2.3% in the year-to-date (YTD) period, underperforming the Zacks Internet – Software industry's growth of 18.5% and the Zacks Computer and Technology sector's return of 9.7%.

From a valuation standpoint, MongoDB stock is currently trading at a forward 12-month Price/Sales ratio of 7.50X compared with the industry's 5.85X. MDB has a Value Score of F.

The Zacks Consensus Estimate for second-quarter fiscal 2026 earnings is pegged at 64 cents per share, which has remained unchanged over the past 30 days, indicating an 8.57% year-over-year decline. MongoDB currently carries a Zacks Rank #2 (Buy).

This article was originally published on Zacks Investment Research.
