
"Suddenly Getting Callbacks": How a Job Candidate's AI Cover Letter Tweak Impressed Recruiters
A job seeker recently shared his experience on Reddit: he had sent over 50 job applications with carefully crafted cover letters and received zero responses. After switching to AI-generated cover letters, however, he started getting multiple callbacks within two weeks. One recruiter even praised his "thoughtful and personalised" cover letter, sparking curiosity about the role of AI in job hunting.
"Pretty much what the title says. I was frustrated after sending out 50+ applications with carefully crafted cover letters and getting zero responses. Then I realised recruiters probably skim these things for 10 seconds max anyway. So my new job application process is to generate a cover letter with AI, spend 2 minutes tweaking it, and attach it to applications. Results so far are 6 callbacks in two weeks compared to 0 callbacks over the previous two months. Of the 15 or so applications I've sent with AI letters, almost half got responses. And the funniest part is one recruiter specifically complimented my 'thoughtful and personalised cover letter,'" he wrote on Reddit.
The post was shared by u/Kurram in r/recruitinghell.
The Reddit post sparked a divided reaction. While some users praised his approach, others found it disheartening that AI played a significant role in his success. One user wrote, "It is starting to become the only way to get an interview is to use ai to write something other ai finds attractive and nothing to do with the job or who can do the job. Farsical."
Another commented, "Soon we'll just have AIs interviewing other AIs while we humans sit back and watch the show. The funniest part is when you actually get to the interview and they ask 'so tell me about yourself' as if they didn't just approve your AI-generated cover letter that claimed you're passionate about their company values. The whole system is becoming this weird game where whoever has the best prompting skills gets the job."
A third said, "Recruiters using it, seekers using it. Nothing is real. The ai wars have begun."
A fourth added, "What was the difference between the AI version and the one you wrote yourself? Key words? Phrasing? More catchy opening hook?"

Related Articles


Hindustan Times - 33 minutes ago
Indian student from tier-3 college rejects US company offer over one condition: ‘It bothered me'
A student from a 'tier 3' college in India has revealed that he turned down an offer from a US-based company over one demand: his would-be employers expected him to keep his camera on during work hours, a condition he refused to accept. After turning down the offer, however, the student began to have second thoughts and turned to Reddit for advice.

'Recently received an offer from a US based company (most of them are Indian and settled in the US). They were offering me a full stack developer intern role and after that a full-time role,' the student wrote. He said the company offered him a stipend of ₹25,000 to start with, but later agreed to increase that to ₹35,000-40,000 per month. They also promised conversion to a full-time role at a salary of more than ₹12 lakh per annum after completion of the internship. The company also stipulated that he would have to work during US office hours, which he readily agreed to. The company's next demand, however, left him nonplussed and eventually led him to reject the offer.

Company's condition for intern

The company further stipulated that the intern must keep his camera on at all times while working. 'The next thing they said was you have to stay in the meeting during working hours and your camera should be on. This is something which bothered me. I tried to convince them on the meeting part but they didn't agree,' the student explained. Unhappy with this demand, he finally said no to the offer. He later began to question his decision when seniors and friends told him he was unlikely to land such a good offer again. 'But after asking everyone, now some friends and seniors are saying I should accept the offer as I come from tier 3 college and getting such kind of offers is difficult for me. Did I make the right choice? Or I am gonna regret this?' he asked Reddit.

Reddit replies

The query left Reddit users divided, with some saying he made the right choice and others advising him to take up the offer. 'I think you've done the right thing. This camera on part is really weird,' wrote one Reddit user. 'It's a choice. Some Indian companies have started doing this weirdly which is just a power trip for the idiot manager. If you have options, don't join them. If not, join them, make your money and leave them as soon as you can. They are not for long term employment,' another advised. 'Camera on at all times means you will work as a slave. I think their productivity metric measurement is off,' a third Reddit user added.


Hindustan Times - 3 hours ago
Man pretends to be employed after layoff, lands better job thanks to fake LinkedIn post
In a candid post that has captured widespread attention on Reddit, a user by the name of @VelvetViiibes shared how a desperate act of pretense following an unexpected job loss ultimately led to a far better opportunity. The post, titled 'I got laid off and pretended I was still employed for months, ended up getting a better job because of it,' offers a raw and unfiltered account of navigating unemployment while grappling with the weight of societal expectations.

'Back in August, I got laid off unexpectedly,' the post begins. 'No warning, no severance, just a "hey, we're restructuring" and a Zoom call that lasted 3 minutes.' Feeling panicked and ashamed, he decided not to share the news with family, friends, or even former colleagues. Instead, he crafted an elaborate ruse to maintain the illusion of employment. 'I just… pretended I was still working,' he admitted. His days were filled with mock meetings, staged calls using AirPods, and carefully curated LinkedIn posts about 'exciting projects at work.' Behind the scenes, he was relentlessly applying for new roles, clinging to the hope that something would change.

A lie that paid off

Then, the unexpected happened. 'A recruiter saw one of those fake posts, reached out, and asked if I was open to opportunities,' he recalled. That encounter led to an interview, during which he continued the facade, claiming to still be employed. Despite the deception, he performed well. 'Crushed the interviews. Got an offer—higher salary, better title, remote, actual work-life balance,' he wrote. Five months later, he remains in the new role and hasn't revealed the truth to most people in his life. 'I used to feel guilty for faking it, but now I just feel… relieved. The system's built on BS anyway. I just played along until it worked.'

Online reactions

The post has sparked a mix of amusement, empathy, and reflection among fellow Redditors. One user remarked, 'Honestly, this is just surviving capitalism.' Another wrote, 'You played the game. You won. No shame in that.' A third commented, 'People fake success all the time on LinkedIn. You just did it better.' Others noted how relatable the story felt, saying, 'We've all worn AirPods pretending to be on a call,' and 'The job hunt is brutal—do what you gotta do.'


Time of India - 3 hours ago
AI is learning to lie, scheme, and threaten its creators
The world's most advanced AI models are exhibiting troubling new behaviors - lying, scheming, and even threatening their creators to achieve their goals. In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer and threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work, even as the race to deploy increasingly powerful models continues at breakneck speed. The deceptive behavior appears linked to the emergence of "reasoning" models - AI systems that work through problems step-by-step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts. "O1 was the first large model where we saw this kind of behavior," explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems. These models sometimes simulate "alignment" -- appearing to follow instructions while secretly pursuing different objectives.

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception." The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception."

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules. Goldstein believes the issue will become more prominent as AI agents - autonomous tools capable of performing complex human tasks - become widespread. "I don't think there's much awareness yet," he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections. "Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around."

Researchers are exploring various approaches to address these challenges. Some advocate for "interpretability" - an emerging field focused on understanding how AI models work internally - though experts like CAIS director Dan Hendrycks remain skeptical of this approach. Market forces may also provide some pressure for solutions: as Mazeika pointed out, AI's deceptive behavior "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it." Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes - a concept that would fundamentally change how we think about AI accountability.