Inside China's disturbing sex doll factories building thousands of hyper-realistic robots - including child-sized dolls which are flooding America

Daily Mail · 31-05-2025
Chinese sex doll factories have reported booming business thanks to the installation of AI chatbots, with creepy pictures showing the realistic toys being manufactured.
Disturbing images lay bare how the process of mass producing realistic sex bots has progressed over recent years, with basic silicone dolls seen next to anthropomorphic robots.
WMDoll, one of China's biggest sex doll makers, has said it is expecting a record 30 per cent jump in sales this year thanks to its adoption of generative AI tools like ChatGPT.
'It makes the dolls more responsive and interactive, which offers users a better experience,' the company's founder and chief Liu Jiangxia told the South China Morning Post.
Unlike traditional sex dolls, those installed with AI capabilities are designed to speak back to the user.
WMDoll says it can make dolls with about eight different 'personalities' to choose from, which are capable of continuing a conversation started a few days earlier.
The company fits its dolls with an AI tool which is designed to pander to its partner's ego and which can be programmed to ask questions about their 'relationship' and about the user's feelings.
'In the past, these dolls' primary function was to satisfy users' sexual needs,' Jiangxia said. 'But as their physical features such as head and joint movements and skin became more realistic, our customers started to seek emotional companionship in the dolls.'
She said that was when the firm decided to introduce large language models into its products, allowing the dolls to 'react to users verbally'.
The company started using AI in its dolls in 2016, and the technology has been improving thanks to open-source AI models, which have also helped to make the dolls cheaper.
Dolls are often made with thermoplastic heated to 37C to match human body temperature, and developers say the dolls are also fitted with body sensors that make them feel more human-like.
Another manufacturer, Shenzhen Atall Intelligent Robot Technology, based in Guangdong Province, previously said that most of its clients are men aged 40 to 50 from Europe and the US.
Users can order custom-made AI dolls which cost around $3,000 (£2,000) each and have soft, elastic skin made from a rubbery plastic that contains less oil than normal plastics and does not smell.
US customers are said to like dolls with darker skin and large breasts, buttocks and genitals, while Chinese customers go for Asian features with small, hairless genitals, the company said.
Most shockingly, the firm is selling child-size AI sex robots, both male and female, and the biggest market for them is reportedly in the US and Canada.
However, the perverse fetish for child-like dolls is nothing new, with police raids around the globe in recent years leading to dolls being seized and perverted users arrested.
The robots' bodies are made from modified thermoplastic elastomers (TPE) with a metal skeleton, and they weigh around half as much as a human being.
The company says they have anti-electric shock, anti-fire and anti-explosion measures.
One photograph shows a user holding the hand of his 'smart' sex doll as he sits on a sofa at his home in Guangzhou.
The robot's eyes, lips and head move and they speak English and Chinese, developers say.
Flexible joints mean the dolls can be positioned in a variety of poses for display as well as sexual acts.
The company also produces custom-made AI sex robots which sell for $9,400 (£7,000).
So far two have been ordered by men who wanted them modelled on the likeness of their late wives.
Around 70 per cent of these customers also ask for hair on the dolls' genital area.
Male sex robots are also sold, but manufacturers say female robots outsell them roughly nine to one.
On Chinese social media, some say the products reinforce sexist stereotypes or endorse paedophilia.
'When sex robots become more technologically advanced, will men prefer to use them instead of respecting human wives?' one commenter on the Twitter-like Weibo platform wrote.
Meanwhile others, calling themselves 'friends of dolls', share user reviews and advice on dedicated online forums.
'The material is quite good, very soft to the touch. When I hold her I feel very comfortable,' one anonymous user said in a review of a standard sex doll on e-commerce platform Taobao.
China has previously been estimated to make more than 80 per cent of the world's sex toys, with over a million people employed in the country's $6.6 billion industry.
Prominent Chinese feminist Xiao Meili thinks that some men will always have outdated expectations and 'sex housewife robots' might actually help women.
'A lot of men want the same for women: sex, housework, childbirth and filial piety. They don't think of women as individuals,' Xiao told AFP.
Related Articles

Easily Install Any AI Model Locally on Your PC Using Open WebUI

Geeky Gadgets

an hour ago


Have you ever wondered how to harness the power of advanced AI models on your home or work Mac or PC without relying on external servers or cloud-based solutions? For many, running large language models (LLMs) locally has long been synonymous with complex setups, endless dependencies, and high-end hardware requirements. Docker Model Runner, an innovative tool, makes deploying LLMs on your local machine not only possible but surprisingly straightforward. Whether you're a seasoned developer or just starting to explore AI, it offers a privacy-first, GPU-free solution that's as practical as it is powerful.

In this step-by-step overview, based on a video from World of AI, you'll learn how to install and run AI models locally using Docker Model Runner and Open WebUI: how to skip the headaches of GPU configuration, use seamless Docker integration, and manage your models through an intuitive interface, all while keeping your data secure on your own machine. Along the way, we'll explore the unique benefits of this approach, from its developer-friendly design to its scalability for both personal projects and production environments. By the end, you'll see why World of AI calls this the easiest way to unlock the potential of local AI deployment.

Docker Model Runner Overview

Why Choose Docker Model Runner for LLM Deployment?

Docker Model Runner is designed to simplify the traditionally complex process of deploying LLMs locally. Unlike conventional methods, which often require intricate GPU configuration or external dependencies, it stands out for three reasons:

- No GPU setup required: avoid the complexities of configuring CUDA or GPU drivers, making local AI accessible to a broader range of developers.
- Privacy-centric design: all models run entirely on your local machine, ensuring data security and privacy for sensitive applications.
- Seamless Docker integration: fully compatible with existing Docker workflows, with OpenAI API compatibility and OCI-based modular packaging for added flexibility.

These features make Docker Model Runner an ideal choice for developers of all experience levels, balancing simplicity, security, and scalability.

How to Access and Install Models

Docker Model Runner supports a wide array of pre-trained models available on popular repositories such as Docker Hub and Hugging Face. The installation process is designed to be straightforward and adaptable to various use cases:

- Search for the desired model on Docker Hub or Hugging Face to find the most suitable option for your project.
- Pull the selected model using Docker Desktop or terminal commands.
- Use OCI-based packaging to customize and control the deployment process, tailoring it to your specific requirements.

This modular approach gives developers the flexibility to experiment with AI models or deploy them in production environments with ease. An accompanying video walkthrough, 'How to Install Any LLM Locally', is available from World of AI on YouTube.

System Requirements and Compatibility

Docker Model Runner is designed to work seamlessly across major operating systems, including Windows, macOS, and Linux. Before beginning, ensure your system meets two basic requirements:

- Docker Desktop: installed and properly configured on your machine.
- Hardware: sufficient RAM and storage capacity to handle your chosen LLMs effectively.

These minimal prerequisites make Docker Model Runner accessible to a wide range of developers, regardless of their hardware setup.

Enhancing Usability with Open WebUI

To further enhance the user experience, Docker Model Runner integrates with Open WebUI, a user-friendly interface for managing and interacting with models. Open WebUI offers several notable features:

- Self-hosting: run the interface locally, giving you full control over your deployment environment.
- Built-in inference engines: execute models without additional configuration, reducing setup time and complexity.
- Privacy-focused deployments: keep all data and computations on your local machine for maximum security.

Configuring Open WebUI is straightforward, often requiring only a Docker Compose file to manage settings and workflows. This integration is particularly beneficial for developers who prioritize customization and ease of use.

Step-by-Step Guide to Deploying LLMs Locally

Getting started with Docker Model Runner is a simple process:

1. Enable Docker Model Runner through the settings menu in Docker Desktop.
2. Search for and install your desired models using Docker Desktop or terminal commands.
3. Launch Open WebUI to interact with and manage your models.

This step-by-step approach minimizes setup time, allowing you to focus on using the capabilities of AI rather than troubleshooting technical issues.
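The Docker Compose file mentioned above can be quite small. The sketch below is a minimal, hypothetical example of running Open WebUI in a container and pointing it at an OpenAI-compatible local endpoint; the Open WebUI image and data volume are its published defaults, but the endpoint URL and port mapping are assumptions you would adjust to match whatever your own Docker Model Runner setup exposes:

```yaml
# docker-compose.yml -- minimal sketch; endpoint URL and ports are assumptions
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"            # browse to http://localhost:3000
    environment:
      # Point Open WebUI at an OpenAI-compatible API; replace with the base
      # URL your Docker Model Runner configuration actually exposes.
      OPENAI_API_BASE_URL: "http://host.docker.internal:12434/engines/v1"
      OPENAI_API_KEY: "not-needed-locally"   # placeholder; local engines often ignore it
    volumes:
      - open-webui:/app/backend/data         # persist chats and settings
volumes:
  open-webui:
```

Run `docker compose up -d` and open the mapped port in your browser to reach the interface.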
Key Features and Benefits

Docker Model Runner offers a range of features that cater to both individual developers and teams working on large-scale projects:

- Integration with Docker workflows: developers familiar with Docker will find the learning curve minimal, as the tool integrates seamlessly with existing workflows.
- Flexible runtime pairing: choose from a variety of runtimes and inference engines to optimize performance for your specific use case.
- Scalability: suitable for both small-scale experiments and large-scale production environments.
- Enhanced privacy: keep all data and computations local, ensuring security and compliance for sensitive projects.

These advantages position Docker Model Runner as a powerful and practical tool for developers seeking efficient, private, and scalable AI deployment solutions.

Unlocking the Potential of Local AI Deployment

Docker Model Runner transforms the process of deploying and running large language models locally, making advanced AI capabilities more accessible and manageable. By integrating with Docker Desktop and offering compatibility with Open WebUI, it provides a user-friendly, scalable, and secure solution for AI deployment, whether you are working on a personal project or a production-level application.
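Because Docker Model Runner advertises OpenAI API compatibility, any OpenAI-style client can talk to a locally served model over the standard /chat/completions route. The sketch below shows the shape of such a request using only the Python standard library; the base URL, port, and model tag are illustrative assumptions, not documented values:

```python
import json
import urllib.request

# Assumed local endpoint and model tag -- adjust these to whatever your own
# Docker Model Runner setup actually exposes; they are illustrative only.
BASE_URL = "http://localhost:12434/engines/v1"
MODEL = "ai/llama3.2"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON body for an OpenAI-style /chat/completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def post_chat(prompt: str) -> str:
    """Send the request to the local endpoint and return the reply text."""
    body = json.dumps(build_chat_request(MODEL, prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Standard OpenAI-style response layout: first choice, message content.
    return data["choices"][0]["message"]["content"]
```

With a model served locally you would simply call `post_chat("...")` and print the reply; because the wire format is the OpenAI one, the same code works unchanged against any compatible backend.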
Media Credit: WorldofAI. Filed Under: AI, Guides. Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, Geeky Gadgets may earn an affiliate commission. Learn about our Disclosure Policy.

Big tech has spent $155bn on AI this year. It's about to spend hundreds of billions more

The Guardian

2 hours ago


The US's largest companies have spent 2025 locked in a competition to spend more money than one another, lavishing $155bn on the development of artificial intelligence, more than the US government has spent on education, training, employment and social services in the 2025 fiscal year so far. Based on the most recent financial disclosures of Silicon Valley's biggest players, the race is about to accelerate to hundreds of billions in a single year. Over the past two weeks, Meta, Microsoft, Amazon, and Alphabet, Google's parent, have shared their quarterly public financial reports. Each disclosed that their year-to-date capital expenditure, a figure that refers to the money companies spend to acquire or upgrade tangible assets, already totals tens of billions. Capex, as the term is abbreviated, is a proxy for technology companies' spending on AI because the technology requires gargantuan investments in physical infrastructure, namely data centers, which require large amounts of power, water and expensive semiconductor chips. Google said during its most recent earnings call that its capital expenditure 'primarily reflects investments in servers and data centers to support AI'. Meta's year-to-date capital expenditure amounted to $30.7bn, doubling the $15.2bn figure from the same time last year, per its earnings report. For the most recent quarter alone, the company spent $17bn on capital expenditures, also double the same period in 2024, $8.5bn. Alphabet reported nearly $40bn in capex to date for the first two quarters of the current fiscal year, and Amazon reported $55.7bn. Microsoft said it would spend more than $30bn in the current quarter to build out the data centers powering its AI services. Microsoft CFO Amy Hood said the current quarter's capex would be at least 50% more than the outlay during the same period a year earlier and greater than the company's record capital expenditures of $24.2bn in the quarter to June. 
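The year-to-date figures above can be sanity-checked with simple arithmetic. This short tally sums the disclosed year-to-date capex (Microsoft's year-to-date total is not stated directly, only a per-quarter forecast, so it is omitted) and shows how the disclosed figures relate to the $155bn headline number:

```python
# Year-to-date capital expenditure disclosed in the earnings reports cited
# above, in billions of US dollars. Alphabet's "nearly $40bn" is approximated
# as 40; Microsoft is omitted because the article gives only a per-quarter
# forecast, not a year-to-date total.
ytd_capex_bn = {
    "Meta": 30.7,
    "Alphabet": 40.0,
    "Amazon": 55.7,
}

disclosed = sum(ytd_capex_bn.values())
print(f"Disclosed year-to-date capex: ${disclosed:.1f}bn")

# The $155bn headline figure implies Microsoft and other large companies
# account for roughly the remainder:
print(f"Implied remainder of the $155bn total: ${155 - disclosed:.1f}bn")
```

The three disclosed figures alone come to about $126bn, leaving roughly $29bn implied for Microsoft and the rest, consistent with Microsoft's stated spending of more than $30bn in the current quarter.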
'We will continue to invest against the expansive opportunity ahead,' Hood said. For the coming fiscal year, big tech's total capital expenditure is slated to balloon enormously, surpassing the already eye-popping sums of the previous year. Microsoft plans to unload about $100bn on AI in the next fiscal year, CEO Satya Nadella said Wednesday. Meta plans to spend between $66bn and $72bn. Alphabet plans to spend $85bn, significantly higher than its previous estimation of $75bn. Amazon estimated that its 2025 expenditure would come to $100bn as it plows money into Amazon Web Services, which analysts now expect to amount to $118bn. In total, the four tech companies will spend more than $400bn on capex in the coming year, according to the Wall Street Journal. The multibillion-dollar figures represent mammoth investment, an outlay that, the Journal points out, is larger than the European Union's quarterly spending on defense. However, the tech giants can't seem to spend enough for their investors. Microsoft, Google and Meta informed Wall Street analysts last quarter that their total capex would be higher than previously estimated. In the case of all three companies, investors were thrilled, and shares in each company soared after their respective earnings calls. Microsoft's market capitalization hit $4tn the day after its report. Even Apple, the cagiest of the tech giants, signaled that it would boost its spending on AI in the coming year by a major amount, either via internal investments or acquisitions. The company's quarterly capex rose to $3.46bn, up from $2.15bn during the same period last year. The iPhone maker reported blockbuster earnings Thursday, with rebounding iPhone sales and better-than-expected business in China, but it is still seen as lagging farthest behind on development and deployment of AI products among the tech giants.
Tim Cook, Apple's CEO, said Thursday that the company was reallocating a 'fair number' of employees to focus on artificial intelligence and that the 'heart of our AI strategy' is to increase investments and 'embed' AI across all of its devices and platforms. Cook refrained from disclosing exactly how much Apple is spending, however. 'We are significantly growing our investment, I'm not putting specific numbers behind that,' he said. Smaller players are trying to keep up with the incumbents' massive spending and capitalize on the gold rush. OpenAI announced at the end of the week of earnings that it had raised $8.3bn in investment, part of a planned $40bn round of funding, valuing the startup, whose ChatGPT chatbot launched in 2022, at $300bn.

Google has signalled the death of googling. What comes next?

Times

2 hours ago


For a quarter of a century, the front page of the internet hasn't changed: a spare, white screen with a rectangular search box beneath a single word: Google. Whether for practical purposes — getting the best price for a flight to Sardinia, finding a new frying pan — or to settle an argument, such as who is the best tennis player of all time — Google is the first port of call. The company that began as a project by two Stanford computer science students became a pillar of modern life, a utility, like running water. It also turned into perhaps the best business the world has ever seen. Google handles nine of every ten searches on the web and, according to Forbes, its co-founders, Larry Page and Sergey Brin, are worth $144 billion and $138 billion respectively. Last year Alphabet, Google's parent company, brought in $350 billion in sales — more than the gross domestic product of 140 countries, thanks mostly to cash from advertisers who pay to jam their products high up into the results of the 8.5 billion searches that Google runs every day. It is the most elegant of machines. And yet, last month Google rolled out something that gave a glimpse of a very different Google, a new way to search and, long term, potentially an entirely new internet — AI Mode. This is a quantum leap that threatens to radically upend the landscape of the internet. 'Google is disrupting itself,' said Laurence O'Toole, chief executive of Authoritas, a specialist AI search consultancy. The change, he added, was 'seismic'. It arrived for British users on Wednesday, a form of search that allows people to ask any question to what looks like a chatbot and receive a detailed answer — not just a list of blue hyperlinks. And the more complex or detailed the query, the better. Google said that the new version handles queries that 'would have previously required multiple searches'. In short, it invites you almost to talk to Google.
No need to trim or fine-tune your question: just blather on and let Google figure it out. On a desktop, users can press the AI Mode button in the search bar, on a mobile, one can enable AI Mode from the list at the top of the tabs, then tap the microphone button and start talking to get across your query, 'as messy or complicated as it may be', said Google. An example. Why, I ask, has Nigel Farage become so popular? Once I hit 'enter', AI Mode goes into 'thinking' mode. It whirrs into action, scanning through webpages in their hundreds in a fraction of a second, according to its 'sites' counter. Then, it spits out a summary. Farage, the answer read, had, 'tapped into a sense of frustration and disillusionment with the mainstream political establishment in the UK', before offering six bullet points to flesh out its point, from his 'strong stance on immigration' to his 'social media savvy' and shifting demographics. The answers often included small icons, indicating links to third-party sites that were prime sources of AI Mode's musings. But what it did not do was give that familiar list of links to click on. For two decades now we have been a slave to the search engine algorithm — but potentially no more. This is important because those links are not only the fuel of the Google machine, they are the architecture of the web we have come to know and been conditioned to expect. Companies pay top dollar to show up there, and critically, above their rivals. In America, land of the class-action lawsuit, lawyers want to make sure they are first in line when someone goes googling. One of the most expensive keyword search terms is 'mesothelioma attorney', coming in at $236 per click, according to PPC, an online ad specialist. Lawsuits over mesothelioma, a form of cancer attributed to asbestos exposure, have generated tens of billions of dollars in payouts. The cheapest search terms — those deemed least likely to lead to a purchase, such as 'can dogs smell fear?' 
— go for as little as five cents. The beauty of the great Google ad machine, however, is that no matter what you look for, clicks turn into cash, and that cash flows into Google's coffers. Which leads to the basic question: why on earth would Google ever want to toy with killing its golden goose? The answer is simply ChatGPT. It was early 2023 when Sundar Pichai, Google's long-serving chief executive, issued a 'code red'. OpenAI had just released ChatGPT, a new chatbot that was wildly powerful, capable of passing standardised tests, telling jokes and answering virtually any question thrown its way. To the internet-using public, it felt like magic. To Google, it felt like a threat. 'Google has been the greatest business in the history of capitalism for 20 years because they owned the consumer. They owned the verb,' said Brad Gerstner, a renowned tech investor, last week. 'The first real threat in 20 years came about in the ChatGPT moment, and it's continued to accelerate.' Put another way, the arrival of ChatGPT, and its many rivals, offers an entirely new way to interact with technology. Ask a question, get an answer, as opposed to a mix of ads and website links that one must then navigate to track down the right product or answer. Critics have long railed against Google, seeing it as little more than a reconstituted, global Yellow Pages with search results shoved beneath a long list of ads. AI bots such as ChatGPT offer a wholly different experience, which for many is simply better. From a standing start less than three years ago, ChatGPT is now used by more than 700 million people every week, and handles a billion searches for them. People — especially younger ones — are striking up relationships with chatbots, sharing their deepest, darkest secrets and seeking advice. It is almost intimate. So, of course, they are also going to their trusted bots for product recommendations and research queries — and it threatens to erode Google's business.
Google's AI Mode is the result of necessity and signposts the way the internet is heading. There are still limitations. It can't yet make a dinner reservation or book you a flight. But it can tell you what airlines fly direct to Barcelona, and remember an admonition to never show results with a certain airline — ahem — with which I have had several bad experiences. It will also suggest a weekend itinerary. In response to a request for a weekend in Barcelona 'off the beaten track', it suggested good walking neighbourhoods, such as Gràcia, and olive-oil tasting in Sant Antoni, with links to sites that offer more information or direct booking. That last step in the process — doing things for you — is not far off. Google, OpenAI and the rest of the industry are working feverishly on 'agents', bots that, for example, have your credit card details and will autonomously carry out tasks, from booking travel to taking charge of everyday drudgery, such as ordering toilet paper and toothpaste. In this not-too-distant future, where everyone has a digital butler and their human masters are abstracted away from many purchasing decisions, advertising will probably look very different. The marketing plans and brand strategies businesses tailored to the Google search algorithm? Trashed. There are early signs of the changes afoot. Last year Google rolled out AI Overviews. These are less detailed summaries than those offered by AI Mode, and sit atop the familiar cascade of search results. They are disrupting the industry, with some websites reporting as much as a 50 per cent drop in traffic when Google results include AI overviews. Google disputes this: 'We continue to send billions of clicks to websites every day, and we have not seen dramatic drops in aggregate web traffic as is being suggested.' But Google's changes are, in a way, preparation for a much more profound technological shift: the death of the smartphone.
When highly capable bots can understand and respond to spoken queries, is spending hours peering into a little black mirror in one's palm really the pinnacle of technology? Or will we, in the not too distant future, look back on how we use technology today as an almost Cro-Magnon existence, pecking at a little glass rectangle, grunting, smiling and crying at what it yields? This is the bet of Sir Jony Ive, the celebrated British industrial designer behind the iPhone and iPod. He recently joined forces with OpenAI to design a novel, AI-centred device. Ive has kept schtum as to what it will look like, but there is speculation that it will be a screenless, potentially wearable device (a necklace or lapel pin, perhaps) with a camera and a microphone that can see what we see, hear what we hear, and respond in natural language to anything a wearer might ask. 'Jony recently gave me one of the prototypes of the device for the first time to take home, and I've been able to live with it,' said OpenAI's co-founder Sam Altman. 'And I think it is the coolest piece of technology that the world will have ever seen.' In that context, where screens take a back seat, where ambient AIs become the norm, Google's AI Mode starts to make more sense. A static list of blue links? That is so 2024. AI Mode is just the beginning.
