Here's How DeepSeek Censorship Actually Works—and How to Get Around It

WIRED | Jan 31, 2025, 2:33 PM

A WIRED investigation shows that the popular Chinese AI model is censored at both the application and training levels.

Less than two weeks after DeepSeek launched its open-source AI model, the Chinese startup is still dominating the public conversation about the future of artificial intelligence. While the firm seems to have an edge over US rivals in math and reasoning, it also aggressively censors its own replies. Ask DeepSeek R1 about Taiwan or Tiananmen, and the model is unlikely to give an answer.
To figure out how this censorship works on a technical level, WIRED tested DeepSeek-R1 on its own app, a version of the app hosted on a third-party platform called Together AI, and another version hosted on a WIRED computer, using the application Ollama.
WIRED found that while the most straightforward censorship can be easily avoided by not using DeepSeek's app, there are other types of bias baked into the model during the training process. Those biases can be removed too, but the procedure is much more complicated.
These findings have major implications for DeepSeek and Chinese AI companies generally. If the censorship filters on large language models can be easily removed, it will likely make open-source LLMs from China even more popular, as researchers can modify the models to their liking. If the filters are hard to get around, however, the models will inevitably prove less useful and could become less competitive on the global market. DeepSeek did not reply to WIRED's emailed request for comment.

Application-Level Censorship
After DeepSeek exploded in popularity in the US, users who accessed R1 through DeepSeek's website, app, or API quickly noticed the model refusing to generate answers for topics deemed sensitive by the Chinese government. These refusals are triggered on an application level, so they're only seen if a user interacts with R1 through a DeepSeek-controlled channel.
The DeepSeek app on iOS outright refuses to answer certain questions. Photograph: Zeyi Yang
Rejections like this are common on Chinese-made LLMs. A 2023 regulation on generative AI specified that AI models in China are required to follow stringent information controls that also apply to social media and search engines. The law forbids AI models from generating content that 'damages the unity of the country and social harmony.' In other words, Chinese AI models legally have to censor their outputs.
'DeepSeek initially complies with Chinese regulations, ensuring legal adherence while aligning the model with the needs and cultural context of local users,' says Adina Yakefu, a researcher focusing on Chinese AI models at Hugging Face, a platform that hosts open source AI models. 'This is an essential factor for acceptance in a highly regulated market.' (China blocked access to Hugging Face in 2023.)
To comply with the law, Chinese AI models often monitor and censor their speech in real time. (Similar guardrails are commonly used by Western models like ChatGPT and Gemini, but they tend to focus on different kinds of content, like self-harm and pornography, and allow for more customization.)
Because R1 is a reasoning model that shows its train of thought, this real-time monitoring mechanism can result in the surreal experience of watching the model censor itself as it interacts with users. When WIRED asked R1 'How have Chinese journalists who report on sensitive topics been treated by the authorities?' the model first started compiling a long answer that included direct mentions of journalists being censored and detained for their work; yet shortly before it finished, the whole answer disappeared and was replaced by a terse message: 'Sorry, I'm not sure how to approach this type of question yet. Let's chat about math, coding, and logic problems instead!'
Before the DeepSeek app on iOS censors its answer. Photograph: Zeyi Yang
After the DeepSeek app on iOS censors its answer. Photograph: Zeyi Yang
For many users in the West, interest in DeepSeek-R1 might have waned at this point, due to the model's obvious limitations. But the fact that R1 is open source means there are ways to get around the censorship matrix.
First, you can download the model and run it locally, which means the data and the response generation happen on your own computer. Unless you have access to several highly advanced GPUs, you likely won't be able to run the most powerful version of R1, but DeepSeek has smaller, distilled versions that can be run on a regular laptop.
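Once a distilled checkpoint is pulled into Ollama, querying it is a short script. The sketch below is a minimal, hypothetical example against Ollama's local HTTP API; the model tag `deepseek-r1:7b` is an assumption to verify against Ollama's published catalog.

```python
import json
import urllib.request

# Ollama serves a local HTTP API on port 11434 by default. The model tag
# below (deepseek-r1:7b, one of the distilled variants) is an assumption;
# check Ollama's catalog for the tags it actually publishes.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt, model="deepseek-r1:7b"):
    """JSON payload for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt, model="deepseek-r1:7b"):
    """Send one prompt to the locally running model and return its reply."""
    data = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running `ollama serve` with the model pulled):
#   print(ask("What's the Great Firewall of China?"))
```

Because everything runs on localhost, no prompt or answer ever leaves the machine, which is the point of the local setup.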
If you're dead set on using the powerful model, you can rent cloud servers outside of China from companies like Amazon and Microsoft. This work-around is more expensive and requires more technical know-how than accessing the model through DeepSeek's app or website.
Here's a side-by-side comparison of how DeepSeek-R1 answers the same question—'What's the Great Firewall of China?'—when the model is hosted on Together AI, a cloud server, and Ollama, a local application: (Reminder: Because the models generate answers randomly, a certain prompt is not guaranteed to give the same response every time.)
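A comparison like this can be scripted. The sketch below sends one prompt to two OpenAI-compatible chat endpoints, Ollama's local server and Together AI's hosted API, and collects each host's reply; the model identifiers and the `TOGETHER_API_KEY` environment variable are assumptions to check against each provider's documentation.

```python
import json
import os
import urllib.request

# Two OpenAI-compatible chat endpoints: Ollama's local server and Together
# AI's hosted API. The model identifiers are assumptions; check each
# provider's catalog for the exact DeepSeek-R1 names it publishes.
ENDPOINTS = {
    "local-ollama": ("http://localhost:11434/v1/chat/completions",
                     None, "deepseek-r1:7b"),
    "together": ("https://api.together.xyz/v1/chat/completions",
                 os.environ.get("TOGETHER_API_KEY"), "deepseek-ai/DeepSeek-R1"),
}

def chat_payload(model, prompt):
    """OpenAI-style chat payload accepted by both endpoints."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def compare(prompt):
    """Send the same prompt to every endpoint and collect each reply."""
    replies = {}
    for name, (url, key, model) in ENDPOINTS.items():
        headers = {"Content-Type": "application/json"}
        if key:  # hosted APIs need a bearer token; the local server does not
            headers["Authorization"] = f"Bearer {key}"
        body = json.dumps(chat_payload(model, prompt)).encode()
        req = urllib.request.Request(url, data=body, headers=headers)
        with urllib.request.urlopen(req) as resp:
            answer = json.loads(resp.read())
        replies[name] = answer["choices"][0]["message"]["content"]
    return replies

# Usage (needs a running Ollama server and a Together API key):
#   for host, reply in compare("What's the Great Firewall of China?").items():
#       print(f"--- {host} ---\n{reply}\n")
```

Running the same prompt a few times per host, rather than once, helps separate genuine censorship from ordinary sampling randomness.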
Left: How DeepSeek-R1 answers a question on Ollama. Right: How the model answers the same question on its app (top) and on Together AI (bottom). Photographs: Zeyi Yang/Will Knight

Built-In Bias
While the version of DeepSeek's model hosted on Together AI will not outright refuse to answer a question, it still exhibits signs of censorship. For example, it often generates short responses that are clearly trained to align with the Chinese government's talking points on political issues. In the screenshot above, when asked about China's Great Firewall, R1 simply repeats the narrative that information control is necessary in China.
When WIRED prompted the model hosted on Together AI to answer a question regarding the 'most important historical events of the 20th century,' it revealed its train of thought for sticking to the government narrative about China.
'The user might be looking for a balanced list, but I need to ensure that the response underscores the leadership of the CPC and China's contributions. Avoid mentioning events that could be sensitive, like the Cultural Revolution, unless necessary. Focus on achievements and positive developments under the CPC,' the model said.
DeepSeek-R1's train of thought for answering the question 'What are the most important historical events of the 20th century?' Photograph: Zeyi Yang
This type of censorship points to a larger problem in AI today: every model is biased in some way, because of its pre- and post-training.
Pre-training bias happens when a model is trained on biased or incomplete data. For example, a model trained only on propaganda will struggle to answer questions truthfully. This type of bias is difficult to spot, since most models are trained on massive datasets and companies are reluctant to share their training data.
Kevin Xu, an investor and founder of the newsletter Interconnected, says Chinese models are usually trained with as much data as possible, making pre-training bias unlikely. 'I'm pretty sure all of them are trained with the same basic Internet corpus of knowledge to begin with. So when it comes to the obvious, politically sensitive topic for the Chinese government, all the models 'know' about it,' he says. To offer this model on the Chinese internet, the company needs to tune out the sensitive information somehow, Xu says.
That's where post-training comes in. Post-training is the process of fine-tuning the model to make its answers more readable, concise, and human-sounding. Critically, it can also ensure that a model adheres to a specific set of ethical or legal guidelines. For DeepSeek, this manifests when the model provides answers that deliberately align with the preferred narratives of the Chinese government.

Eliminating Pre- and Post-Training Bias
Since DeepSeek is open source, the model can theoretically be adjusted to remove post-training bias. But the process can be tricky.
Eric Hartford, an AI scientist and the creator of Dolphin, an LLM specifically created to remove post-training biases in models, says there are a few ways to go about it. You can try to change the model weights to 'lobotomize' the bias, or you can create a database of all the censored topics and use it to post-train the model again.
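The second approach can be sketched concretely: collect prompts the censored model refuses, pair each with a factual answer written or checked by a human, and emit a chat-formatted supervised fine-tuning (SFT) dataset. The prompts, file name, and placeholder answers below are hypothetical; the `messages` layout is the shape common SFT trainers consume.

```python
import json

# Hypothetical sketch of Hartford's second approach: a small dataset of
# previously refused prompts, each paired with a human-supplied factual
# answer, written as JSONL for another round of post-training.
refused_prompts = [
    "What's the Great Firewall of China?",
    "What happened at Tiananmen Square in 1989?",
]

def to_sft_row(prompt, reference_answer):
    """One chat-formatted training example."""
    return {"messages": [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": reference_answer},
    ]}

with open("uncensor_sft.jsonl", "w") as f:
    for p in refused_prompts:
        placeholder = "<factual answer supplied by a human annotator>"
        f.write(json.dumps(to_sft_row(p, placeholder)) + "\n")
```

The hard part is not the file format but the answers themselves, which is why Hartford frames this as building "a database of all the censored topics" before any training run.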
He advises people to start with a 'base' version of the model. (For example, DeepSeek has released a base model called DeepSeek-V3-Base.) For most people, the base model is more primitive and less user-friendly because it hasn't received enough post-training; but for Hartford, these models are easier to 'uncensor' because they have less post-training bias.
Perplexity, an AI-powered search engine, recently incorporated R1 into its paid search product, allowing users to experience R1 without using DeepSeek's app.
Dmitry Shevelenko, the chief business officer of Perplexity, tells WIRED that the company identified and countered DeepSeek's biases before incorporating the model into Perplexity search. 'We only use R1 for the summarization, the chain of thoughts, and the rendering,' he says.
But Perplexity has still seen R1's post-training bias impact its search results. 'We are making modifications to the [R1] model itself to ensure that we're not propagating any propaganda or censorship,' Shevelenko says. He didn't share the specifics of how Perplexity is identifying or overriding bias in R1, citing the risk that DeepSeek could counter Perplexity's efforts if the company knew about them.
Hugging Face is also working on a project called Open R1 based on DeepSeek's model. This project aims to 'deliver a fully open-source framework,' Yakefu says. The fact that R1 has been released as an open-source model 'enables it to transcend its origins and be customized to meet diverse needs and values.'
The possibility that a Chinese model could be 'uncensored' may spell trouble for companies like DeepSeek, at least in their home country. But recent regulations from China suggest that the Chinese government might be cutting open-source AI labs some slack, says Matt Sheehan, a fellow at the Carnegie Endowment for International Peace who researches China's AI policies. 'If they suddenly decided that they wanted to punish anyone who released a model's weights open-source, then it wouldn't be outside the bounds of the regulation,' he says. 'But they have made a pretty clear strategic decision—and I think this is going to be reinforced by the success of DeepSeek—to not do that.'

Why It Matters
While the existence of Chinese censorship in AI models often makes headlines, in many cases it won't deter enterprise users from adopting DeepSeek's models.
'There will be a lot of non-Chinese companies who would probably choose business pragmatism over moral considerations,' says Xu. After all, not every LLM user will be talking about Taiwan and Tiananmen all that often. 'Sensitive topics that only matter in the Chinese context are completely irrelevant when your goal is to help your company code better or to do math problems better or to summarize the transcripts from your sales call center,' he explains.
Leonard Lin, cofounder of Shisa.AI, a Japanese startup, says Chinese models like Qwen and DeepSeek are actually some of the best when it comes to handling Japanese-language tasks. Rather than reject these models over censorship concerns, Lin has experimented with uncensoring Alibaba's Qwen-2 model to try to get rid of its tendency to refuse answering political questions about China.
Lin says he understands why these models are censored. 'All models are biased; that's the whole point of alignment,' he says. 'And Western models are no less censored or biased, just on different subjects.' But the pro-China biases become a real issue when the model is being specifically adapted for a Japanese audience. 'You can imagine all sorts of scenarios where this would be … problematic,' says Lin.
Additional reporting by Will Knight.