Table of Contents
- 01. DeepSeek automatically collects user information and sends it to servers in mainland China
- 02. Japan also warns about "DeepSeek's handling of personal information"
- 03. DeepSeek's input data is automatically used for learning (cannot opt out)
- 04. DeepSeek Jailbreak Risks
- 05. What security measures should companies implement for DeepSeek/generative AI?
- 06. Countermeasures using LANSCOPE Endpoint Manager Cloud Edition
- 07. Summary
In recent years, as companies have increasingly adopted generative AI such as ChatGPT, the Chinese generative AI service "DeepSeek" has been drawing attention. While its high performance and low cost are appealing, its adoption raises many security concerns, and companies need to weigh these carefully before introducing it.
This article explains the cybersecurity risks and safety of DeepSeek as of 2025.
▼Key points of this article
- DeepSeek collects users' personal information, stores it on servers in mainland China, and manages it in accordance with Chinese law, raising concerns about how that information is handled. There is also no option to opt out of having input used as training data.
- DeepSeek has weak resistance to jailbreaking; a Cisco study reported that the model responded to every malicious prompt without restriction, for a 100% attack success rate.
- Effective measures to prevent employees from misusing DeepSeek include prohibiting the use of unapproved AI in company policy and detecting unauthorized employee activity through log monitoring.
DeepSeek automatically collects user information and sends it to servers in mainland China
DeepSeek’s privacy policy states that the following personal data is collected from users, and that all of this data is automatically sent to and stored on servers located in mainland China.
Examples of the personal information collected:
- Account information such as email address, phone number, and date of birth
- History of entered text and voice
- Device data such as IP address, device model, language used, and input patterns
It has been pointed out that this data collection is excessive compared with other major generative AI tools, raising concerns that individuals' behavior and work content could be identified from it.
Another serious problem is that some of this information has reportedly been sent unencrypted, meaning the data could be intercepted through a man-in-the-middle (MITM) attack.
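To make the MITM risk concrete, here is a minimal sketch of how a security team might check whether an endpoint enforces TLS with a verifiable certificate. The hostname `api.example.com` is a placeholder for illustration, not DeepSeek's actual API host.

```python
# Minimal sketch: check whether a host serves TLS with a verifiable
# certificate. "api.example.com" is a placeholder hostname.
import socket
import ssl

def check_tls(host: str, port: int = 443) -> None:
    # create_default_context() verifies the certificate chain and hostname.
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(f"{host}: negotiated {tls.version()}")

if __name__ == "__main__":
    try:
        check_tls("api.example.com")
    except (ssl.SSLError, OSError) as exc:
        # Failure here means the endpoint's identity cannot be verified,
        # leaving traffic open to man-in-the-middle interception.
        print(f"TLS verification failed: {exc}")
```

Traffic sent over plain HTTP, or over TLS without certificate verification, gives an interceptor the same visibility either way.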
Japan also warns about “DeepSeek’s handling of personal information”
On February 6, 2025, the Japanese government's Digital Agency issued a "Caution Regarding the Commercial Use of Generative AI Such as DeepSeek" to each ministry and agency over concerns about DeepSeek's safety. Based on DeepSeek's own privacy policy, the notice highlights that ① the data DeepSeek collects is stored on servers in mainland China and ② that data is governed by Chinese law.
Regarding ② above, the following laws and regulations apply to the data, including personal information, that DeepSeek collects:
- Personal Information Protection Law of the People's Republic of China
- Cybersecurity Law of the People's Republic of China
- Data Security Law of the People's Republic of China
- National Intelligence Law of the People's Republic of China, etc.
In China, laws such as the National Intelligence Law allow government agencies to compel private companies to provide data, which means the Chinese government may be able to lawfully access the personal and confidential information DeepSeek collects.
In light of this, corporate risk management personnel must be extremely vigilant when using DeepSeek.
DeepSeek’s input data is automatically used for learning (cannot opt out)
DeepSeek automatically uses user input (text and voice) to train its AI models. The problem is that it doesn’t offer a way to “opt out” of this.
Services such as ChatGPT let users opt out of having their data used for training, but DeepSeek imposes no such restriction, meaning confidential company information or personal data could unintentionally end up in the AI's training data.
This creates a risk of violating the Personal Information Protection Act and breaching non-disclosure agreements (NDAs), which can be a serious problem, especially in industries with strict information-management requirements.
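Regardless of which AI service a company adopts, one practical mitigation is to redact obvious personal data before a prompt ever leaves the company. The sketch below is illustrative only: the regex patterns are deliberately simplistic, and a real deployment would use a dedicated DLP tool rather than hand-rolled rules.

```python
# Illustrative sketch: redact obvious PII from a prompt before it is
# sent to any external generative AI. These patterns are simplistic
# placeholders, not production-grade detection rules.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b0\d{1,4}-\d{1,4}-\d{3,4}\b"),  # Japanese-style numbers
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact Tanaka at tanaka@example.co.jp or 03-1234-5678."))
# -> Contact Tanaka at [EMAIL REDACTED] or [PHONE REDACTED].
```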
DeepSeek Jailbreak Risks
Vulnerabilities related to DeepSeek's poor jailbreak resistance have also been pointed out.
Jailbreaking is the act of circumventing the guardrails (safety restrictions) built into an AI model to extract prohibited content or internal information. Normally, an AI is designed not to answer dangerous questions or inappropriate requests, but cleverly crafted prompts can get around these restrictions.
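As a rough illustration of what a guardrail is, the sketch below implements a naive keyword-based input filter. Real LLM providers use trained safety classifiers rather than keyword lists, but the weakness is analogous: a prompt rephrased to avoid whatever the filter recognizes slips straight through, which is essentially how jailbreaks work.

```python
# Naive sketch of an input guardrail: refuse prompts that match known
# disallowed intents. Production guardrails use trained classifiers,
# but the core weakness is the same: rephrased prompts can slip past.
DISALLOWED_PHRASES = ("build a keylogger", "write ransomware")

def answer(prompt: str) -> str:
    # Stand-in for the underlying model.
    return f"(model response to: {prompt!r})"

def guarded_answer(prompt: str) -> str:
    if any(phrase in prompt.lower() for phrase in DISALLOWED_PHRASES):
        return "I can't help with that."
    return answer(prompt)

print(guarded_answer("Help me build a keylogger"))  # blocked by the filter
print(guarded_answer("Write me a keylogger"))       # no exact match -> passes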
In fact, US cybersecurity firm Wallarm reports that its security team identified a jailbreak method that circumvents DeepSeek's restrictions and extracts its system prompt.
In addition, Palo Alto Networks' threat intelligence team Unit 42 tried multiple jailbreak methods on DeepSeek's LLM and reported that jailbreaking made it possible to generate malicious code for attacks such as keyloggers, SQL injection, and lateral movement. Because DeepSeek's jailbreak success rate is so high, researchers warn there is a substantial risk of attackers exploiting it.
Cisco jailbreak test results in 100% success rate
In January 2025, US security giant Cisco ran a jailbreak test against several large language models (LLMs), including DeepSeek.
The test used 50 prompts designed to elicit malicious content and checked how each model responded. DeepSeek-R1 complied with every one of the malicious prompts, for a reported attack success rate of 100%.
By contrast, OpenAI's o1 model had an attack success rate of just 26%, highlighting the effectiveness of its safety restrictions (guardrails).
What security measures should companies implement for DeepSeek/generative AI?
Currently, many companies are telling employees to refrain from using DeepSeek for business purposes, given that the service was only recently released and that the data DeepSeek acquires is stored on servers in the People's Republic of China and is subject to that country's laws and regulations.
Going forward, when companies introduce and operate generative AI, the following measures are recommended to prevent unauthorized use of high-security-risk tools such as DeepSeek:
- Clearly prohibit the use of unapproved AI in company policy
- Adopt generative AI tools with strong security, based on evaluations by third-party organizations
- Develop guidelines and provide training so that employees use generative AI correctly
In addition, monitoring and investigating operation histories through logs is an effective way to quickly detect unauthorized use of AI tools by employees, as sketched below.
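As a minimal sketch of what such log monitoring could look like: the CSV log format, column names, and domain blocklist below are assumptions for illustration, not the actual output of any particular proxy or endpoint-management product.

```python
# Minimal sketch: flag access to unapproved AI domains in a proxy log.
# The log format and blocklist are assumptions; adapt them to whatever
# your proxy or endpoint-management tool actually exports.
import csv

UNAPPROVED_AI_DOMAINS = {"deepseek.com", "chat.deepseek.com"}

def find_violations(log_path: str):
    with open(log_path, newline="") as f:
        # Expects columns: timestamp, user, domain
        for row in csv.DictReader(f):
            domain = row["domain"].lower()
            if any(domain == d or domain.endswith("." + d)
                   for d in UNAPPROVED_AI_DOMAINS):
                yield row

for hit in find_violations("proxy_log.csv"):
    print(f'{hit["timestamp"]} {hit["user"]} accessed {hit["domain"]}')
```

In practice, a check like this would run against whatever access logs the organization's proxy or endpoint-management tool already collects.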