Scammers using AI to dupe the lonely looking for love
Meta on Wednesday warned internet users to be wary of online acquaintances promising romance but seeking cash as scammers use deep fakes to prey on those looking for love.
"This is a new tool in the toolkit of scammers," Meta global threat disruption policy director David Agranovich told journalists during a briefing.
"These scammers evolve consistently; we have to evolve to keep things right."
Detection systems across Meta's family of apps, including Instagram and WhatsApp, rely heavily on behavior patterns and technical signals rather than on imagery, meaning they can spot scammer activity despite the AI trickery, according to Agranovich.
"It makes our detection and enforcement somewhat more resilient to generative AI," Agranovich said.
He gave the example of a recently disrupted scheme that apparently originated in Cambodia and targeted people in Chinese and Japanese languages.
Researchers at OpenAI determined that the "scam compound" seemed to be using the San Francisco artificial intelligence company's tools to generate and translate content, according to Meta.
Generative AI technology has been around for more than a year, but in recent months its use by scammers has surged, "ethical hacker" and SocialProof Security chief executive Rachel Tobac said during the briefing.
GenAI tools available for free from major companies allow scammers to change their faces and voices on video calls as they pretend to be someone they are not.
"They can also use these deep fake bots that allow you to build a persona or place phone calls using a voice clone and a human actually doesn't even need to be involved," Tobac said.
"They call them agents, but they're not being used for customer support work. They're being used for scams in an automated fashion."
Tobac urged people to be "politely paranoid" when an online acquaintance encourages a romantic connection, particularly when it leads to a request for money to deal with a supposed emergency or business opportunity.
- Winter blues -
The isolation and low spirits that can come with winter weather, along with the Valentine's Day holiday, are seen as a window of opportunity for scammers.
"We definitely see an influx of scammers preying on that loneliness in the heart of winter," Tobac said.
The scammer's main goal is money; the typical tactic is to build trust quickly and then contrive a reason for needing cash or personal data that could be used to access financial accounts, according to Tobac.
"Being politely paranoid goes a long way, and verifying people are who they say they are," Tobac said.
Scammers operate across the gamut of social apps, with Meta seeing only a portion of the activity, according to Agranovich.
Last year, Meta took down more than 408,000 accounts from West African countries being used by scammers to pose as military personnel or businessmen to romance people in Australia, Britain, Europe, the United States and elsewhere, according to the tech titan.
Along with taking down nefarious networks, Meta is testing facial recognition technology to check potential online imposters detected by its systems or reported by users.
Related Articles
Yahoo
Morgan Stanley Remains a Buy on Salesforce (CRM) With a PT of $404
Salesforce, Inc. (NYSE:CRM) is one of the 13 Best Long Term Growth Stocks to Invest in Right Now. In a report released on June 24, Keith Weiss of Morgan Stanley maintained a Buy rating on Salesforce with a price target of $404.00. The analyst backed the optimistic sentiment with the company's potential for future growth, citing management's clear strategy to accelerate revenue growth to the low teens.

Weiss considers this a notable improvement over the current growth rate, as the strategy covers optimizing pricing and packaging, improving bookings dynamics, and leveraging new innovations. All of these factors are anticipated to support solid growth for Salesforce, in his view.

The analyst further reasoned that Salesforce has a promising product portfolio that includes GenAI solutions such as Data Cloud and Agentforce. These solutions are generating positive customer feedback and significant annual recurring revenue, contributing to the company's overall positive outlook. In addition, core products such as Sales & Service Clouds are showing stability, along with momentum in Slack, MuleSoft, and Tableau. According to the analyst, all these factors provide solid ground to overcome challenges in Marketing and Commerce.

Salesforce designs and develops cloud-based enterprise software for customer relationship management. Its solutions encompass customer service and support, sales force automation, digital commerce, marketing automation, collaboration, community management, industry-specific solutions, and the Salesforce platform. It also offers training, guidance, support, and advisory services.

While we acknowledge the potential of CRM as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk.
If you're looking for an extremely undervalued AI stock that also stands to benefit significantly from Trump-era tariffs and the onshoring trend, see our free report on the best short-term AI stock. READ NEXT: The Best and Worst Dow Stocks for the Next 12 Months and 10 Unstoppable Stocks That Could Double Your Money. Disclosure: None.

Business Insider
OpenAI and Microsoft are dueling over AGI. These real-world tests will prove when AI is really better than humans.
AGI is a pretty silly debate. It's only really important in one way: It governs how the world's most important AI partnership will change in the coming months. That's the deal between OpenAI and Microsoft.

This is the situation right now: Until OpenAI achieves Artificial General Intelligence — where AI capabilities surpass those of humans — Microsoft gets a lot of valuable technological and financial benefits from the startup. For instance, OpenAI must share a significant portion of its revenue with Microsoft. That's billions of dollars.

One could reasonably argue that this might be why Sam Altman bangs on about OpenAI getting close to AGI soon. Many other experts in the AI field don't talk about this much, think the AGI debate is off base in various ways, or consider it just not that important. Even Anthropic CEO Dario Amodei, one of the biggest AI boosters on the planet, doesn't like to talk about AGI.

Microsoft CEO Satya Nadella sees things very differently. Wouldn't you? If another company is contractually required to give you oodles of money until it reaches AGI, you're probably not going to think we're close to AGI! Nadella has called the push toward AGI "benchmark hacking," which is so delicious. This refers to AI researchers and labs designing AI models to perform well on wonky industry benchmarks, rather than in real life.

Here's OpenAI's official definition of AGI: "highly autonomous systems that outperform humans at most economically valuable work." Other experts have defined it slightly differently, but the main point is that AI machines and software must be better than humans at a wide variety of useful tasks. You can already train an AI model to be better at one or two specific things, but to get to artificial general intelligence, machines must be able to do many different things better than humans.
My real-world AGI tests

Over the past few months, I've devised several real-world tests to see if we've reached AGI. These are fun or annoying everyday things that should just work in a world of AGI, but right now, for me, they don't. I also canvassed input from readers of my Tech Memo newsletter and tapped my source network for fun suggestions. Here are my real-world tests that will prove we've reached AGI:

The PR departments of OpenAI and Anthropic use their own AI technology to answer every journalist's question. Right now, these companies are hiring a ton of human journalists and other communications experts to handle a barrage of reporter questions about AI and the future. When I reach out to these companies, humans answer every time. Unacceptable! Unless this changes, we're not at AGI.

This suggestion is from a hedge fund contact, and I love it: Please, please can my Microsoft Outlook email system stop burying important emails while still letting spam through? This seems like something Microsoft and OpenAI could solve with their AI technology. I haven't seen a fix yet. In a similar vein, can someone please stop Cactus Warehouse from texting me every two days with offers for 20% off succulents? I only bought one cactus from you guys, once! Come on, AI, this can surely be solved!

My 2024 Tesla Model 3 Performance hits potholes in FSD. No wonder tires have to be replaced so often on these EVs. As a human, I can avoid potholes much better. Elon, the AGI gauntlet has been thrown down. Get on this now.
Can AI models and chatbots make valuable predictions about the future, or do they mostly just regurgitate what's already known on the internet? I tested this recently, right after the US bombed Iran, pitting ChatGPT's stock-picking ability against a single human analyst. Check out the results here. TL;DR: We are nowhere near AGI on this one.

There's a great Google Gemini TV ad where a kid is helping his dad assemble a basketball net. The son is using an Android phone to ask Gemini for the instructions and pointing the camera at his poor father struggling with parts and tools. It's really impressive to watch as Gemini finds the instruction manual online just by "seeing" what's going on live with the product assembly. For AGI to be here, though, the AI needs to just build the damn net itself. I can sit there and read out instructions in an annoying way while someone else toils with fiddly assembly tasks — we can all do that.

Yes, I know these tests seem a bit silly — but AI benchmarks are not the real world, and they can be pretty easily gamed. That last basketball-net test is particularly telling for me. Getting an AI system to actually assemble a basketball net might happen sometime soon. But getting the same system to do a lot of other physical-world manipulation better than humans, too? Very hard, and probably not possible for a very long time.

As OpenAI and Microsoft try to resolve their differences, the companies can tap experts to weigh in on whether the startup has reached AGI, per the terms of their existing contract, according to The Information. I'm happy to be an expert advisor here. Sam and Satya, let me know if you want help! For now, I'll leave the final words to a real AI expert.
Konstantin Mishchenko, an AI research scientist at Meta, recently tweeted this, while citing a blog by another respected expert in the field, Sergey Levine: "While LLMs learned to mimic intelligence from internet data, they never had to actually live and acquire that intelligence directly. They lack the core algorithm for learning from experience. They need a human to do that work for them," Mishchenko wrote, referring to AI models known as large language models. "This suggests, at least to me, that the gap between LLMs and genuine intelligence might be wider than we think. Despite all the talk about AGI either being already here or coming next year, I can't shake off the feeling it's not possible until we come up with something better than a language model mimicking our own idea of how an AI should look," he concluded.
Yahoo
OpenAI boss accuses Meta of trying to poach staff with $100m sign-on bonuses
The boss of OpenAI has claimed that Mark Zuckerberg's Meta has tried to poach his top artificial intelligence experts with 'crazy' signing bonuses of $100m (£74m), as the scramble for talent in the booming sector intensifies.

Sam Altman spoke about the offers in a podcast on Tuesday. They have not been confirmed by Meta. OpenAI, the company that developed ChatGPT, said it had nothing to add beyond its chief executive's comments.

'They started making these giant offers to a lot of people on our team – $100m signing bonuses, more than that comp [compensation] per year,' Altman told the Uncapped podcast, which is presented by his brother, Jack. 'It is crazy. I'm really happy that, at least so far, none of our best people have decided to take them up on that.'

He said: 'I think the strategy of a tonne of upfront, guaranteed comp, and that being the reason you tell someone to join … the degree to which they're focusing on that, and not the work and not the mission – I don't think that's going to set up a great culture.'

Meta last week launched a $15bn drive towards computerised 'super-intelligence' – a type of AI that can perform better than humans at all tasks. The company bought a large stake in the $29bn startup Scale AI, set up by the programmer Alexandr Wang, 28, who joined Meta as part of the deal.

Last week, a Silicon Valley venture capitalist, Deedy Das, tweeted: 'The AI talent wars are absolutely ridiculous.' Das, a principal at Menlo Ventures, said Meta had been losing AI candidates to rivals despite offering $2m-a-year salaries.

Another report last month found that Anthropic, an AI company backed by Amazon and Google and set up by engineers who left Altman's company, was 'siphoning top talent from two of its biggest rivals: OpenAI and DeepMind'.

The scramble to recruit the best developers comes amid rapid advances in AI technology and a race to achieve human-level AI capacity – known as artificial general intelligence.
The spending on hardware is greater still, with recent estimates from the Carlyle Group, reported by Bloomberg, suggesting $1.8tn could be spent on computing power by 2030. That is more than the annual gross domestic product of Australia.

Some tech firms are buying whole companies to lock in top talent, as seen in part with Meta's Scale AI deal and Google spending $2.7bn last year on a startup founded by the leading AI researcher Noam Shazeer. He co-wrote the 2017 research paper Attention Is All You Need, which is considered a seminal contribution to the current wave of large language model AI systems.

While Meta was founded as a social media company and OpenAI as a non-profit – becoming a for-profit business last year – the two are now rivals. Altman told his brother's podcast that he did not feel Meta would succeed in its AI push, adding: 'I don't think they're a company that's great at innovation.'

He said he had once heard Zuckerberg say that it had seemed rational for Google to try to develop a social media function in the early days of Facebook, but 'it was clear to people at Facebook that that was not going to work'. 'I feel a little bit similar here,' Altman added.

Despite the huge investments in the sector, Altman suggested the result could be 'we build legitimate super intelligence, and it doesn't make the world much better [and] doesn't change things as much as it sounds like it should'. 'The fact that you can have this thing do this amazing stuff for you, and you kind of live your life the same way you did two years ago,' he said.

'The thing that I think will be the most impactful in that five to 10-year timeframe is AI will actually discover new science. This is a crazy claim to make, but I think it is true, and if it is correct, then over time I think that will dwarf everything else [AI has achieved].'