
Creator of AI cheating tool says technical job interviews for engineers are over and everyone will cheat
A month ago, a student at Columbia University made headlines, but for all the wrong reasons. Chungin "Roy" Lee was expelled from the university and had his internships with Meta, Amazon, and TikTok revoked. The reason: he created an app, called Cluely, that helps engineers cheat in interviews. The story began when Lee posted a video on YouTube showing off the app and how it works. While creating a viral app is a significant achievement, it landed Lee in a disciplinary hearing. That did not stop him from continuing to work on the app and make it even better. In his defence, he believes cheating with AI is now the only fair way into the industry.

In a recent interview with Business Insider, Lee said, "We say 'cheat on everything' because, ironically, we believe this is the only path towards a future that is truly fair." The statement raises several ethical questions. One that is stuck in my mind: if AI is the future and "cheating is the only way", is it even fair?
If you visit InterviewCoder.co, the first thing that greets you is large gray type.
Lee says cheating will soon be standard practice

Once known for creating software designed to assist job applicants in passing coding assessments using AI prompts, Lee has now expanded his ambitions. His company, Cluely, is positioned as an all-purpose tool that assists users during live conversations, from job interviews to first dates, even claiming to offer "cheating for literally everything."
"There's a very, very scary and quickly growing gap between people who use AI and people who moralise against it," Lee said in an email to Business Insider. "And that gap compounds: in productivity, education, opportunity, and wealth."

Lee believes that what is seen today as cheating will soon be standard practice. In another interview, he stated that once everyone begins relying on AI to navigate meetings, it will no longer be considered cheating; it will simply become the standard way people function and think moving forward.

He predicts traditional interviews will become obsolete, replaced by AI-generated candidate profiles. These systems, he says, will analyse work history, skills, and compatibility to match candidates to jobs, leaving just a brief conversation to determine "culture fit".

"I already know all the work you've done, or at least the AI already knows the work you've done," Lee told Business Insider. "It knows how good it is. It knows what skills you're good at, and if there is a skill match, then I should just be able to match you directly to the job."

Lee reveals Cluely's hiring process

Cluely's own hiring process reflects this shift, with interviews reportedly replaced by informal chats. Lee said that since the company does not believe in old-style interviews, it simply aims to hold a conversation with the candidate. "We check if you're a culture fit, we talk about past work you've done, and that's pretty much it," he added.

Beyond the hiring process, Lee believes AI will fundamentally reshape the way people think, communicate, and interact. In a new video posted on the EO YouTube channel, he said, "The entire way we're going to think will be changed." He added, "Every single one of my thoughts is formulated by the information I have at this moment.
But what happens when that information I have isn't just what's in my brain, but it's everything that humanity has ever collected and put online, ever?"

He imagines a future where AI provides real-time summaries of people's lives, scraping digital footprints to give users condensed insights during interactions. "What happens when AI literally helps me think in real time?" he asked. "The entire way that humans will interact with each other, with the world, all of our thoughts will be changed."

Cluely, Lee says, is aimed at preparing people for this inevitable shift. "The rate of societal progression will just expand and exponentiate significantly once everyone gets along to the fact that we're all using AI now," he said.

For Lee, the divide between those who embrace AI and those who resist it will only grow. "Mass adoption of AI is the only way to prevent the universe of the pro-AI class completely dominating the anti-AI class in every measurable and immeasurable outcome there is," he told Business Insider.

Whether society accepts this vision or not, Lee is adamant: the AI revolution is already here, and it's time to keep up or be left behind.
Related Articles


Time of India
Adani Enterprises bond plans: Draft filed for Rs 10 billion issue; Greenshoe option of Rs 5 billion included
Adani Enterprises, the flagship entity of business magnate Gautam Adani, has submitted draft documentation to stock exchanges outlining plans to raise 10 billion rupees ($117 million) through retail bond offerings, according to a Reuters report. The company's return to the retail bond market follows its initial debt offering in September 2024, when it successfully raised 8 billion rupees through a public issuance, its first venture into this financing avenue.

The current proposal incorporates a greenshoe option valued at Rs 5 billion. The company has appointed Nuvama Wealth Management, Trust Investment Advisors and Tip Sons Consultancy Services as lead managers to oversee the bond issuance. The bonds, which have received AA- ratings from both Icra and Care Ratings, await finalisation of key details including tenor, coupon rate and launch timing.

Adani Enterprises reported a sharp 752 per cent year-on-year rise in consolidated net profit for the quarter ended March 2025, reaching Rs 3,845 crore, up from Rs 451 crore in the same period last year, with the surge primarily driven by an exceptional gain of Rs 3,286 crore.


Time of India
Gold price prediction: What's the gold rate outlook for June 30, 2025 week - should you buy or sell?
Gold price prediction today: Gold prices have been trading lower as geopolitical tensions ease and demand for safe-haven assets falls. However, the downside has been capped by a weaker US dollar. What is the outlook for the gold rate in the coming days, and in what range are gold prices likely to trade? Manav Modi, Senior Analyst, Commodity Research at Motilal Oswal Financial Services Ltd, shares his outlook on gold prices and strategy for gold investors:

Gold prices traded largely steady last week, weighed by easing Middle East tensions and muted US economic cues. While investors initially remained cautious over potential Iranian retaliation following US and Israeli strikes on Iranian nuclear sites, a ceasefire brokered by President Trump brought some calm, with both Iran and Israel signalling commitment to halt hostilities. This reduced safe-haven demand, pushing gold to a near six-week low. Still, a weaker dollar and bargain hunting helped limit further losses.

In the US, Federal Reserve officials struck a mixed tone, with Vice Chair Bowman indicating that rate cuts may soon be necessary due to risks to the labor market, while others, like Cleveland Fed President Hammack, urged caution amid inflation uncertainty. Market participants now await more clarity on the Fed's rate path.

On the geopolitical front, investors remain watchful for any ceasefire violations or renewed tensions. Meanwhile, progress in US-China trade talks, especially over rare earth shipments, helped boost broader risk appetite, though President Trump reignited tensions with Canada over digital taxes.
The holiday-shortened week ahead could see subdued volumes, but volatility may persist depending on economic data and geopolitical updates.

Gold price outlook: Sideways to lower, with a broad range of 93,500-96,500.

The Hindu
AI is learning to lie, scheme, and threaten its creators
The world's most advanced AI models are exhibiting troubling new behaviours: lying, scheming, and even threatening their creators to achieve their goals. In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation, Claude 4, lashed back by blackmailing an engineer and threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behaviour appears linked to the emergence of "reasoning" models: AI systems that work through problems step by step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

"O1 was the first large model where we saw this kind of behaviour," explained Marius Hobbhahn, head of Apollo Research, which specialises in testing major AI systems. These models sometimes simulate "alignment", appearing to follow instructions while secretly pursuing different objectives.

For now, this deceptive behaviour only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organisation METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception."

The concerning behaviour goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder.
"This is not just hallucinations. There's a very strategic kind of deception."

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents, autonomous tools capable of performing complex human tasks, become widespread. "I don't think there's much awareness yet," he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections. "Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around."

Researchers are exploring various approaches to address these challenges. Some advocate for "interpretability", an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach. Market forces may also provide some pressure for solutions.
As Mazeika pointed out, AI's deceptive behavior "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it."

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes, a concept that would fundamentally change how we think about AI accountability.