AI Got You the Job. Now It's Getting You Fired.

Forbes · 7 hours ago
Bots write résumés. Bots screen candidates. But when AI runs both sides of hiring, companies like Spotify and IBM are redefining what readiness really means.
More applicants are using AI to craft résumés—and more companies are using AI to screen them. The result? A hiring loop where bots talk to bots before a human ever gets involved. (Photo: ullstein bild via Getty Images)
Exhibit 1: Shortcutting with AI?
Melody didn't lie—well, not exactly. She just let the algorithm do the heavy lifting.
Her résumé? It checked all the right boxes.
Her cover letter? Eerily precise, masterfully echoing and referencing the job posting.
Each of her interview responses? They were smooth. They were polished. And they were very well rehearsed.
With minimal effort, Melody fed a few quick prompts to ChatGPT, then let an AI résumé tool take care of the rest. In the end, none of her application content actually came from her.
Nonetheless, she hit 'send'.
Two weeks later, the call came: Offer extended. She was officially an analyst at a top global consulting firm.
But that's when things got real.
Melody now found herself fumbling through internal dashboards she didn't fully understand. She missed key analytical cues in meetings. And, she reworked client decks multiple times because her manager said they 'lacked strategic framing.'
The AI got her in. But, it seems it didn't get her ready.
When algorithms replace people, we risk more than inefficiency—we risk losing empathy, authenticity, and the nuance that makes someone truly a fit. (Photo: Getty Images)
Melody is not the only one leveraging AI to get hired and get ahead; the practice is spreading fast. So, we have to ask:
· What happens when AI writes your résumé—and screens it too?
· Are we automating ourselves out of authenticity, alignment, and accountability?
· Can soft skills survive when bots control both access and evaluation?
The Data: AI Is Reshaping Hiring—From Both Ends
We've entered a strange feedback loop. AI helps applicants craft their résumés, only to be judged by other AI tools before a human ever weighs in. And it's accelerating.
Gartner estimates that nearly 80% of Fortune 500 companies now use AI-driven software to sift through résumés, long before a human recruiter ever sees a name.
At the same time, SHRM reports that over 30% of job seekers now rely on generative AI to write their cover letters, refine their résumés, and prep for interviews.
LinkedIn's AI résumé assistant, alone, has supported the creation of more than 6 million applications since its late 2024 debut.
And, perhaps the most revealing stat of all? JobScan's research shows that up to 75% of résumés never make it to a human. Most are filtered out by applicant tracking systems (ATS) for formatting errors, missing keywords, or other company-specific baseline criteria.
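To see why keyword-optimized résumés sail through while others stall, here is a minimal sketch of the kind of keyword screen an ATS might run. The keywords, threshold, and scoring below are illustrative assumptions for this article, not the logic of any actual vendor's system.

# Minimal, hypothetical sketch of a keyword-based ATS screen.
# Keywords, weights, and threshold are illustrative assumptions only.
REQUIRED_KEYWORDS = {"sql", "python", "financial modeling", "stakeholder management"}
PASS_THRESHOLD = 0.75  # fraction of required keywords that must appear

def screen_resume(resume_text: str) -> bool:
    """Return True if the résumé clears the keyword filter, False otherwise."""
    text = resume_text.lower()
    hits = sum(1 for keyword in REQUIRED_KEYWORDS if keyword in text)
    return hits / len(REQUIRED_KEYWORDS) >= PASS_THRESHOLD

resume = "Analyst with experience in SQL, Python, and financial modeling."
print(screen_resume(resume))  # True: 3 of 4 keywords meets the 0.75 threshold

A résumé written without the posting's exact vocabulary can fail a screen like this even when the underlying experience fits, which is exactly the gap AI résumé tools are built to exploit.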
Getting the job is one thing. Performing in it is another. When AI shortcuts the prep, the cost often shows up on day one. (Photo: Getty)
Why This Trend Matters Now
We're at a key inflection point. With the end of summer, companies are locking in fall placements and early 2026 hires—particularly across finance, consulting, education, and retail. It's a time when speed matters, and AI is increasingly driving hiring decisions.
But, when fewer humans are involved in the process, the risks arguably grow. We may be matching people to jobs faster—but not always better. So, what are the recommendations for combining the best of AI with the best of human judgment?
5 Best Practices for Humanizing the Hiring Funnel
1. Don't Delegate Fit to a Bot
AI is helpful for initial screenings, but it can't assess emotional intelligence or leadership potential or cultural alignment. Those judgments? They require people.
Take Unilever, for example. The company uses AI to evaluate candidates early in the pipeline. But final hiring decisions always include human interviews. That last step is essential to make sure new hires reflect Unilever's values and leadership expectations.
2. Make Prompting a Core Skill
Knowing how to talk to AI—through clear, precise prompting—is fast becoming a foundational skill for both applicants and employers.
Salesforce recognized this early and integrated prompt training into its Trailhead platform. Teams across departments now learn how to generate better outputs and interpret AI results more critically. As SVP Leah McGowen-Hare puts it, 'prompt fluency is the new digital literacy.'
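What does 'prompt fluency' look like in practice? Here is a rough, hypothetical illustration (the prompts are invented for this article, not drawn from Salesforce's Trailhead materials) contrasting a vague request with a precise one.

# Hypothetical prompts invented for illustration; not Salesforce Trailhead content.
vague_prompt = "Write a cover letter for a consulting job."

precise_prompt = (
    "Write a 250-word cover letter for a strategy analyst role at a global "
    "consulting firm. Reference three years of retail pricing analytics, "
    "SQL and Excel modeling experience, and the posting's emphasis on "
    "client communication. Keep the tone confident and plain, and close with "
    "a specific question about the team's current engagements."
)

# The second prompt gives the model concrete facts, constraints, and a goal,
# so its output needs far less human rework than the first.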
3. Hire for Story, Not Just Syntax
A résumé that checks every box might still tell you nothing about how someone thinks or leads.
That's why Spotify's recruiting team began using structured 'story sessions' during interviews. These conversations encourage candidates to reflect on past decisions and challenges and moments of growth, giving insight into problem-solving styles and values that don't show up in keyword-optimized résumés.
Spotify shifted from résumés to 'story sessions' to understand how candidates think, grow, and lead—because algorithms can't capture human potential. (Photo: picture alliance via Getty Images)
4. Rebuild the Funnel for Human–AI Collaboration
The best systems? They combine what machines do well with what humans do better.
IBM has adopted this model under CHRO Nickle LaMoreaux. Résumés are scanned by AI to flag patterns, but job simulations and real-time human conversations ultimately determine fit. The result is stronger onboarding and smoother team integration.
5. Elevate Soft Skills as Core Hiring Criteria
In a tech-saturated workplace, human traits—like adaptability and communication and empathy—stand out more than ever.
Deloitte has baked these into its process. For client-facing roles, the firm now uses rubrics to assess emotional intelligence and collaboration alongside technical capability. They call it 'human-centric leadership,' and it's becoming a hiring non-negotiable.
To AI or Not?
We're not just building faster hiring systems—we're creating ones that can quietly misfire if we forget what truly matters.
AI might help you land the interview.
It might even polish your pitch and follow-up.
But, it can't show up for you in the moment that counts.
And, it can't decide what leadership looks like.
Only humans can do that.