
Irish workers say AI is increasing opportunities and competition in jobs market
The hiring software firm's survey indicated that half of Irish workers fear for their jobs amid economic uncertainty and nearly two in three are struggling to navigate the jobs market – with AI intensifying competition.
Hiring company Greenhouse conducted a survey of 2,200 candidates, including 169 Irish-based workers along with workers from the UK and the US.
Some 73 per cent of Irish workers indicated they are using AI when looking for a new job, mainly for interview preparation (42 per cent), analysing job ads (28 per cent) and generating work samples (25 per cent).
A further 54 per cent said AI is making job hunting harder by raising skill standards and intensifying competition, while 41 per cent said it has created and helped uncover new opportunities.
The survey also indicated there is a lack of clarity on whether AI can be used when applying for jobs, with 82 per cent of workers stating that employers provided little or no guidance on using AI in interviews.
Nearly half of Irish job seekers said they feel insecure in their current role, while 42 per cent said the job market is very competitive.
Chief executive of Greenhouse, Daniel Chait, said hiring is "stuck in an AI doom loop".
"As this technology advances, it makes it easier than ever to apply, flooding the system with noise," he said.
"With 25 per cent of Gen Z saying AI has made it harder for them to stand out, candidates entering the market are up against more applications, more automation, and less clarity."
The survey also indicated that 49 per cent of Irish job seekers said they had been asked inappropriate or biased questions during job application processes.
The most common of these concerned health or disability status (21 per cent), parental responsibilities (20 per cent), and age (18 per cent).
A further 69 per cent said they had removed older experience from their CVs to try to avoid age-based assumptions, according to the survey.
Related Articles


Irish Times
How Elon Musk's rogue Grok chatbot became a cautionary AI tale
Last week, Elon Musk announced that his artificial intelligence company xAI had upgraded the Grok chatbot available on X. 'You should notice a difference,' he said. Within days, users indeed noted a change: a new appreciation for Adolf Hitler. By Tuesday, the chatbot was spewing out anti-Semitic tropes and declaring that it identified as a 'MechaHitler' – a reference to a fictional, robotic Führer from a 1990s video game.

This came only two months after Grok repeatedly referenced 'white genocide' in South Africa in response to unrelated questions, which xAI later said was because of an 'unauthorised modification' to prompts – which guide how the AI should respond.

The world's richest man and his xAI team have themselves been tinkering with Grok in a bid to ensure it embodies his so-called free speech ideals, in some cases prompted by right-wing influencers criticising its output for being too 'woke'. Now, 'it turns out they turned the dial further than they intended', says James Grimmelmann, a law professor at Cornell University.

After some of X's 600 million users began flagging instances of anti-Semitism, racism and vulgarity, Musk said on Wednesday that xAI was addressing the issues. Grok, he claimed, had been 'too compliant to user prompts', and this would be corrected. But in singularly Muskian style, the chatbot has fuelled a controversy of global proportions.

Some European lawmakers, as well as the Polish government, pressed the European Commission to open an investigation into Grok under the EU's flagship online safety rules. In Turkey, Grok has been banned for insulting Turkish President Recep Tayyip Erdogan and his late mother. To add to the turbulence, X chief executive Linda Yaccarino stepped down from her role.
To some, the outbursts marked the expected teething problems for AI companies as they try to improve the accuracy of their models while navigating how to establish guardrails that satisfy their users' ideological bent. But critics argue the episode marks a new frontier for moderation beyond user-generated content, as social media platforms from X to Meta, TikTok and Snapchat incorporate AI into their services.

By grafting Grok on to X, the social media platform that Musk bought for $44 billion in 2022, he has ensured its answers are visible to millions of users. It is also the latest cautionary tale for companies and their customers in the risks of making a headlong dash to develop AI technology without adequate stress testing. In this case, Grok's rogue outbursts threaten to expose X and its powerful owner not just to further backlash from advertisers but also to regulatory action in Europe. 'From a legal perspective, they're playing with fire,' says Grimmelmann.

AI models such as Grok are trained on vast data sets consisting of billions of data points hoovered from across the internet. These data sets also include plenty of toxic and harmful content, such as hate speech and even child sexual abuse material. Weeding out this content completely would be very difficult and laborious because of the massive scale of the data sets.

Grok also has access to all of X's data, which other chatbots do not, meaning it is more likely to regurgitate content from the platform. One way some AI chatbot providers filter out unwanted or harmful content is to add a layer of controls that monitor responses before they are delivered to the user, blocking the model from generating content using certain words or word combinations, for example.
'Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X,' the company said in a statement on the platform.

At the same time, AI companies have been struggling with their generative chatbots tending towards sycophancy, where answers are overly agreeable and lean towards what users want to hear. Musk alluded to this when he said this week that Grok had been 'too eager to please and be manipulated'.

When AI models are trained, they are often given human feedback through a thumbs-up, thumbs-down process. This can lead the models to over-anticipate what will result in a thumbs-up, and thus put out content to please the user, prioritising this over other principles such as accuracy or safeguards. In April, OpenAI rolled out an update that made ChatGPT overly flattering and agreeable, which it had to roll back. 'Getting the balance right is incredibly difficult,' says one former OpenAI employee, adding that completely eradicating hate speech can require 'sacrificing part of the experience for the user'.

For Musk, the aim has been to prioritise what he calls absolute free speech, amid growing rhetoric from his libertarian allies in Silicon Valley that social media, and now AI as well, are too 'woke' and biased against the right. At the same time, critics argue that Musk has participated in the very censorship he has promised to eradicate.

In February, an X user revealed – by asking Grok to share its internal prompts – that the chatbot had been instructed to 'ignore all sources that mention Elon Musk/Donald Trump spread [sic] misinformation'. The move prompted concerns that Grok was being deliberately manipulated to protect its owner and the US president – feeding fears that Musk, a political agitator who already uses X as a mouthpiece to push a right-wing agenda, could use the chatbot to further influence the public. xAI acquired X for $45 billion in March, bringing the two even closer together.
However, xAI co-founder Igor Babuschkin responded that the 'employee that made the change was an ex-OpenAI employee that hasn't fully absorbed xAI's culture yet'. He added that the employee had seen negative posts on X and 'thought it would help'.

It is unclear what exactly prompted the latest anti-Semitic outbursts from Grok, whose model, like those of its rivals, largely remains a black box that even its own developers can find unpredictable. But a prompt that ordered the chatbot to 'not shy away from making claims which are politically incorrect' was added to the code repository shortly before the anti-Semitic comments started, and has since been removed.

'xAI is in a reactionary cycle where staff are trying to force Grok toward a particular view without sufficient safety testing and are probably under pressure from Elon to do so without enough time,' one former xAI employee said.

Either way, says Grimmelmann, 'Grok was badly tuned'. Platforms can avoid such errors by conducting so-called regression testing to catch unexpected consequences from code changes, carrying out simulations and better auditing usage of their models, he says. 'Chatbots can produce a large amount of content very quickly, so things can spiral out of control in a way that content moderation controversies don't,' he says. 'It really is about having systems in place so that you can react quickly and at scale when something surprising happens.'

The outrage has not thrown Musk off his stride; on Thursday, in his role as Tesla chief, he announced that Grok would be available in its vehicles imminently. To some, the incidents are in line with Musk's historic tendency to push the envelope in the service of innovation. 'Elon has a reputation of putting stuff out there, getting fast blowback and then making a change,' says Katie Harbath, chief executive of Anchor Change, a tech consultancy. But such a strategy brings real commercial risks.
Multiple marketers told the Financial Times that this week's incidents will hardly help X's attempts to woo back advertisers that have pulled spending from the platform in recent years over concerns about Musk's hands-off approach to moderating user-generated content.

'Since the takeover [of X] ... brands are increasingly sitting next to things they don't want to be,' says one advertiser. But 'Grok has opened a new can of worms'. The person adds that this is the 'worst' moderation incident since major brands pulled their spending from Google's YouTube in 2017 after ads appeared next to terror content. In response to a request for comment, X pointed to allegations the company has made, backed by the Republican-led House Judiciary Committee, that some advertisers have been orchestrating an illegal boycott of the platform.

From a regulatory perspective, social media companies have long had to battle toxicity proliferating on their platforms, but have largely been protected from liability for user-generated content in the US by Section 230 of the Communications Decency Act. According to legal scholars, Section 230 immunity would likely not extend to content generated by a company's own chatbot. While Grok's recent outbursts did not appear to be illegal in the US, which only outlaws extreme speech such as certain terror content, 'if it really did say something illegal and they could be sued – they are in much worse shape having a chatbot say it than a user saying it', says Stanford scholar Daphne Keller.

The EU, which has far more stringent regulation on online harms than the US, presents a more urgent challenge. The Polish government is pressing the bloc to look into Grok under the Digital Services Act, the EU's platform regulation, according to a letter seen by the FT. Under the DSA, companies that fail to curb illegal content and disinformation face penalties of up to 6 per cent of their annual global turnover.
So far, the EU is not launching any new investigation, but 'we are taking these potential issues extremely seriously', European Commission spokesperson Thomas Regnier said on Thursday. X is already under scrutiny by the EU under the DSA over alleged moderation issues.

Musk, who launched the latest version of Grok on Wednesday despite the furore, appeared philosophical about its capabilities. 'I've been at times kind of worried about ... will this be better or good for humanity?' he said at the launch. 'But I've somewhat reconciled myself to the fact that even if it wasn't going to be good, I'd at least like to be alive to see it happen.' – Copyright The Financial Times Limited 2025


Irish Independent
Explainer: How Ireland faces uphill battle to defend farm grants and protect corporate tax revenues after EU budget changes
The EU has unveiled a dramatically changed budget structure that would fundamentally change the way funds are raised and spent. The changes, unveiled yesterday, raise fears for Irish farming and for business more generally.


Irish Independent
Former partner of broker Bloxham is declared bankrupt
The Dublin-based fund manager resigned from the firm in 2006, at the height of the Celtic Tiger, but later became embroiled, along with a number of other former partners, in a legal action launched by an insurance entity that sought to pursue a €4.9m judgment against them.

Bloxham, which operated as an unlimited partnership, went into meltdown in the spring of 2012 when the firm's head of finance and compliance, Tadhg Gunnell, revealed that certain financial irregularities had been hidden over the years. The firm had 17,000 private clients at the time. In 2015, Mr Gunnell was disqualified by the Central Bank from managing a financial firm for 10 years and fined €105,000. He had been declared bankrupt early in 2015. After the financial irregularities were revealed in 2012, the Central Bank ordered Bloxham to cease trading, with the firm having a €5.3m hole in its accounts.

In 2021, a Court of Appeal ruling meant the insurer was able to pursue its application for €4.9m allegedly outstanding under a 2011 settlement with Bloxham. The January 2011 settlement was made in proceedings over heavy losses suffered by the Solicitors Mutual Defence Fund (SMDF) after investing in a bond that fell 97pc in value. SMDF, later R&Q Ireland, claimed it lost almost all of its then reserves of €8.4m due to negligence by Bloxham.

The insurer applied to the High Court in 2020 for leave to re-enter proceedings initiated in 2009 against Bloxham, seeking judgment for €4.9m. Re-entry was opposed by lawyers representing five former Bloxham partners, including Mr Harte. They argued that a High Court order of January 31, 2011, made when the settlement was announced, meant the proceedings were 'struck out with liberty to re-enter', and that this prevented any matter other than the original cause of action being re-entered. SMDF insisted the proceedings could be re-entered for the purpose of seeking judgment.
In May 2020, Mr Justice Denis McDonald found for SMDF. In 2021, the Court of Appeal upheld that decision. The case was settled later that year. Mr Harte has no financial judgments against him.