
Monzo created bank accounts for people claiming to live in 10 Downing Street
The digital bank has been fined £21 million by the Financial Conduct Authority (FCA) for failing to enforce strict anti-financial crime measures.
The FCA investigation found that Monzo had no means to verify customers' addresses, no matter how 'implausible' they were. One customer even gave their address as Monzo's own headquarters.
The financial watchdog had imposed a requirement preventing Monzo from opening new accounts for high-risk customers.
But between August 2020 and June 2022, the regulator said, the digital bank, which has no physical branches, repeatedly failed to comply with its terms and signed up more than 34,000 high-risk customers.
Monzo says the hefty fine draws a line under issues resolved three years ago, and that 'substantial improvements' have since been made.
In their ruling, the FCA said: '[Monzo] allowed customers to provide obviously implausible UK addresses when applying for an account, such as well-known London landmarks including 'Buckingham Palace' and '10 Downing Street', and even Monzo's [own] business address.
'Monzo's decision not to verify, or otherwise monitor, customer addresses also gave rise to other issues.'
Therese Chambers, FCA joint executive director of enforcement and market oversight, said that banks were a vital line of defence in the fight against financial crime.
'[The use of landmark addresses] illustrates how lacking Monzo's financial crime controls were. This was compounded by its inability to properly comply with the requirement not to onboard high-risk customers.'
'They must have the systems in place to prevent the flow of ill-gotten gains into the financial system,' she said.
'Monzo fell far short of what we, and society, expect.'
Monzo's customer base has substantially increased, growing from approximately 250,000 customers in early 2017 to over 12 million by April 2025.
Monzo's Group CEO, TS Anil, told Metro: 'The FCA's findings relate to a historical period that ended three years ago and draw a line under issues that have been resolved and are firmly in the past – with our learnings at the time leading to substantial improvements in our controls.
'I'm pleased the FCA recognises the significant investments we have made, as well as our ongoing commitment to managing these risks today, as we go from strength to strength as a business approaching 13 million customers.
'Financial crime is an issue that affects the entire industry – and at Monzo, we have the right team, best-in-class technology and an unwavering commitment to doing all we can to stop it in its tracks.'

Related Articles


Daily Mail, an hour ago
I received an AI-generated 'rejection email' after applying for a job. It included an embarrassing mistake
A recruiter has learned firsthand how not to use AI in the workplace after sending an awkward auto-generated rejection email to a hopeful job candidate.

The 'application update', which has since been shared thousands of times on social media, was uploaded to Reddit by the amused anonymous recipient.

The message began innocently enough, with a polite 'thank you for replying, unfortunately we will not be moving forward with your application at this time'. Then things took an unfortunate turn for the sender, who had clearly opted to mass-produce responses to save time. Instead of the planned sign-off, the email included the (very specific) AI prompt, aptly titled 'rejection message'.

'Write a warm but generic rejection email that sounds polite yet firm. Do not mention specific reasons for rejection,' the prompt read. 'Make the candidate feel like they were strongly considered even if they weren't. Remember to use candidate name and company name variables.'

The original email then continued with a polite conclusion and well wishes for the recipient's 'future endeavours'.

It's thought the sender was an independent headhunter rather than an in-house employee or recruiter, as a headhunter would be more likely to be seeking roles on behalf of multiple companies.

The email has since been mocked online, with many writing their own cheeky responses and others begging the original poster to reveal the company behind the mistake. 'Forward this to the CEO. Attach your CV. In the subject, write: "I can do a better job than your HR",' one suggested.

Others debated whether the mistake was a better option than no response from the employer at all. 'You know what, at least there was the intention to respectfully let the candidate know. I'll take that over them not bothering at all,' one argued.

Some employees were inspired to share their own recruiter fails, with automation glitches at the top of the list.
'I once got a rejection email addressed to a completely different person; they repeatedly referred to me as Daniel,' one recalled.

'I got a rejection email that was CCed (not BCCed) to around 60 candidates, exposing all our email addresses to each other. I emailed the company to let them know their screw-up. There was zero remorse in the response I got,' another added.

'The acceptance letter to my Masters program addressed me as Pam. I'm a dude and my name is not Pam,' one more wrote.

Some also speculated that the post itself was fake and spread through forums as a warning to employers and applicants to think carefully about their use of automated templates and AI when dealing with each other. 'This is fake. No one writing that prompt would include the "even if they weren't" part; they would just tell ChatGPT to "make the candidate feel like they were strongly considered". The inclusion of the last bit makes it glaringly obvious this is fake,' a man wrote.

'Even so,' one replied.


NBC News, 2 hours ago
Investigation underway after AI Marco Rubio impostor contacts top officials
An investigation is underway after the State Department reported that a Marco Rubio impostor used AI to imitate the secretary of state's voice and contact at least five high-level government officials in mid-June. NBC News' Andrea Mitchell explains the investigation and how you can protect yourself.


BBC News, 2 hours ago
Instagram wrongly says some users breached child sex abuse rules
Instagram users have told the BBC of the "extreme stress" of having their accounts banned after being wrongly accused by the platform of breaching its rules on child sexual exploitation.

The BBC has been in touch with three people who were told by parent company Meta that their accounts were being permanently disabled, only to have them reinstated shortly after their cases were highlighted to journalists.

"I've lost endless hours of sleep, felt isolated. It's been horrible, not to mention having an accusation like that over my head," one of the men told BBC News. Meta declined to comment.

BBC News has been contacted by more than 100 people who claim to have been wrongly banned by Meta. Some talk of a loss of earnings after being locked out of their business pages, while others highlight the pain of no longer having access to years of pictures and memories. Many point to the impact it has had on their mental health.

More than 27,000 people have signed a petition that accuses Meta's moderation system, powered by artificial intelligence (AI), of falsely banning accounts and then having an appeal process that is unfit for purpose. Thousands of people are also in Reddit forums dedicated to the subject, and many users have posted on social media about being banned.

Meta has previously acknowledged a problem with Facebook Groups but denied its platforms were more widely affected.

'Outrageous and vile'

The BBC has changed the names of the people in this piece to protect their identities. David, from Aberdeen in Scotland, was suspended from Instagram on 4 June.
He was told he had not followed Meta's community standards on child sexual exploitation, abuse and nudity. He appealed that day, and was then permanently disabled on Instagram and his associated Facebook and Facebook Messenger accounts.

He found a Reddit thread, where many others were posting that they had also been wrongly banned over child sexual exploitation. "We have lost years of memories, in my case over 10 years of messages, photos and posts - due to a completely outrageous and vile accusation," he told BBC News.

He said Meta was "an embarrassment", with AI-generated replies and templated responses to his questions. He still has no idea why his account was banned.

"I've lost endless hours of sleep, extreme stress, felt isolated. It's been horrible, not to mention having an accusation like that over my head.

"Although you can speak to people on Reddit, it is hard to go and speak to a family member or a colleague. They probably don't know the context that there is a ban wave going on."

The BBC raised David's case with Meta on 3 July, as one of a number of people who claimed to have been wrongly banned over child sexual exploitation. Within hours, his account was reinstated.

In a message sent to David, and seen by the BBC, the tech giant said: "We're sorry that we've got this wrong, and that you weren't able to use Instagram for a while. Sometimes, we need to take action to help keep our community safe."

"It is a massive weight off my shoulders," said David.

Faisal was banned from Instagram on 6 June over alleged child sexual exploitation and, like David, found his Facebook account suspended too. The student from London is embarking on a career in the creative arts, and was starting to earn money via commissions on his Instagram page when it was suspended.
He appealed after feeling he had done nothing wrong, and his account was then banned a few minutes later.

He told BBC News: "I don't know what to do and I'm really upset.

"[Meta] falsely accuse me of a crime that I have never done, which also damages my mental state and health and it has put me into pure isolation throughout the past month."

His case was also raised with Meta by the BBC on 3 July. About five hours later, his accounts were reinstated. He received the exact same email as David, with the apology from Meta.

Faisal told BBC News he was "quite relieved" after hearing the news. "I am trying to limit my time on Instagram now."

Faisal said he remained upset over the incident, and is now worried the account ban might come up if any background checks are made on him.

A third user, Salim, told BBC News that he also had accounts falsely banned for child sexual exploitation. He highlighted his case to journalists, stating that appeals are "largely ignored", business accounts were being affected, and AI was "labelling ordinary people as criminal abusers".

Almost a week after he was banned, his Instagram and Facebook accounts were reinstated.

What's gone wrong?
When asked by BBC News, Meta declined to comment on the cases of David, Faisal, and Salim, and did not answer questions about whether it had a problem with wrongly accusing users of child abuse offences.

In one part of the world, however, it has acknowledged a wider problem. The BBC has learned that the chair of the Science, ICT, Broadcasting, and Communications Committee at the National Assembly in South Korea said last month that Meta had acknowledged the possibility of wrongful suspensions for people in her country.

Carolina Are, a blogger and researcher into social media moderation at Northumbria University, said it was hard to know what the root of the problem was because Meta was not being open about it. But she suggested it could be due to recent changes to the wording of some of its community guidelines and an ongoing lack of a workable appeal process.

"Meta often don't explain what it is that triggered the deletion. We are not privy to what went wrong with the algorithm," she told BBC News.

In a previous statement, Meta said: "We take action on accounts that violate our policies, and people can appeal if they think we've made a mistake."

Meta, in common with all big technology firms, has come under increased pressure in recent years from regulators and authorities to make its platforms safe.

Meta told the BBC it used a combination of people and technology to find and remove accounts that broke its rules, and was not aware of a spike in erroneous account suspensions.

Meta says its child sexual exploitation policy relates to children and "non-real depictions with a human likeness", such as art, content generated by AI or fictional characters.

Meta also told the BBC a few weeks ago that it uses technology to identify potentially suspicious behaviours, such as adult accounts being reported by teen accounts, or adults repeatedly searching for "harmful" content.

It states that when it becomes aware of "apparent child exploitation", it reports it to the National Center for Missing and Exploited Children (NCMEC) in the US.
NCMEC told BBC News it makes all of those reports available to law enforcement around the world.