🎉 Gate.io Growth Points Lucky Draw Round 🔟 is Officially Live!
Draw Now 👉 https://www.gate.io/activities/creditprize?now_period=10
🌟 How to Earn Growth Points for the Draw?
1️⃣ Enter 'Post', and tap the points icon next to your avatar to enter 'Community Center'.
2️⃣ Complete tasks like post, comment, and like to earn Growth Points.
🎁 Every 300 Growth Points to draw 1 chance, win MacBook Air, Gate x Inter Milan Football, Futures Voucher, Points, and more amazing prizes!
⏰ Ends on May 4, 16:00 PM (UTC)
Details: https://www.gate.io/announcements/article/44619
#GrowthPoints#
DeepSeek: A Wake-Up Call for Responsible Innovation and Risk Management
Source: Cointelegraph Original Text: "DeepSeek: A Wake-Up Call for Responsible Innovation and Risk Management"
Author of the viewpoint: Dr. Merav Ozair
Since its release on January 20, the DeepSeek R1 has attracted widespread attention from users as well as global tech tycoons, governments, and policymakers—from praise to skepticism, from adoption to bans, from the brilliance of innovation to immeasurable privacy and security vulnerabilities.
Who is right? Short answer: everyone is right, and everyone is wrong.
This is not the "Sputnik moment"
DeepSeek has developed a large language model (LLM) that performs comparably to OpenAI's GTPo1, but the time and cost required are only a fraction of what OpenAI (and other tech companies) spend to develop their own LLMs.
Through clever architectural optimization, DeepSeek significantly reduced the costs of model training and inference, enabling the development of an LLM in 60 days for less than $6 million.
Indeed, DeepSeek deserves recognition for actively seeking better ways to optimize model structures and code. This is a wake-up call, but it is far from being a "Sputnik moment."
Every developer knows that there are two ways to improve performance: optimize the code, or "smash" a large amount of computing resources. The latter is extremely costly, so developers are always advised to maximize architectural optimization before increasing computing resources.
However, as the valuations of artificial intelligence startups soar and massive investments flood in, developers seem to have become lazy. If you have a budget of billions of dollars, why spend time optimizing model structures?
This is a warning to all developers: return to the basics, innovate responsibly, step out of your comfort zone, break free from conventional thinking, and do not fear challenges to the norm. There is no need to waste money and resources—use them wisely.
Like other LLMs, DeepSeek R1 still has significant shortcomings in reasoning, complex planning capabilities, understanding of the physical world, and persistent memory. Therefore, there are no truly disruptive innovations here.
It is now time for scientists to go beyond LLMs, address these limitations, and develop a "new generation AI architecture paradigm." This may not be LLMs or generative AI — but rather a true revolution.
Paving the way for innovation.
The DeepSeek method may encourage developers worldwide, especially in developing countries, to innovate and develop their own AI applications, regardless of resources. The more people involved in AI research and development, the faster the pace of innovation and the more likely meaningful breakthroughs will be achieved.
This aligns with Nvidia's vision: to make AI affordable and to enable every developer or scientist to develop their own AI applications. This is exactly the significance of the DIGITS project announced earlier this January - a desktop GPU priced at $3000.
Humans need to "get everyone on board" to solve urgent problems. Resources may no longer be a barrier - it's time to break the old paradigms.
At the same time, the release of DeepSeek serves as a wake-up call for operational risk management and responsible AI.
Read the terms carefully
All applications have terms of service, which the public often overlooks.
Some alarming details in the DeepSeek Terms of Service that may affect your privacy, security, and even business strategy:
Data Retention: Deleting your account does not mean your data is deleted - DeepSeek still retains your data.
Monitoring: The application has the right to monitor, process, and collect user inputs and outputs, including sensitive information.
Legal Disclosure: DeepSeek is subject to Chinese law, which means that state agencies may access and monitor your data upon request - the Chinese government is actively monitoring your data.
Unilateral changes: DeepSeek can update the terms at any time without your consent.
Disputes and Litigation: All claims and legal matters are governed by the laws of the People's Republic of China.
The above actions clearly violate the General Data Protection Regulation (GDPR) as well as other GDPR privacy and security violations listed in the complaints filed by Belgium, Ireland, and Italy, which have consequently temporarily banned the use of DeepSeek.
In March 2023, Italian regulators temporarily banned OpenAI's ChatGPT from going online due to GDPR violations, which was only restored a month later after compliance improvements. Will DeepSeek also follow the path of compliance?
Prejudice and Censorship
Like other LLMs, DeepSeek R1 exhibits hallucinations, biases in the training data, and demonstrates behavior aligned with Chinese political positions on certain topics, such as censorship and privacy.
As a Chinese company, this is to be expected. Article 4 of the "Generative AI Law" applicable to AI system providers and users stipulates: this is a review rule. This means that those who develop and/or use generative AI must support the "core socialist values" and comply with relevant Chinese laws.
This does not mean that other LLMs do not have their own biases and "agendas". It highlights the importance of trustworthy and responsible AI, as well as the necessity for users to adhere to strict AI risk management.
Security vulnerabilities of LLM
LLMs may be subject to adversarial attacks and security vulnerabilities. These vulnerabilities are particularly concerning as they will affect any organization or individual building applications based on that LLM.
Qualys has conducted vulnerability testing, ethical risk, and legal risk assessments on the LLaMA 8B Lite version of DeepSeek-R1. The model failed in half of the jailbreak tests – i.e., attacks that bypassed the AI model's built-in security measures and ethical guidelines.
Goldman Sachs is considering using DeepSeek, but a security review is needed, such as for injection attacks and jailbreak testing. Regardless of whether the model originates from China, there are security risks for any company before using AI model-driven applications.
Goldman Sachs is implementing the right risk management measures, and other organizations should follow this practice before deciding to use DeepSeek.
Summarize experience
We must remain vigilant and diligent, implementing adequate risk management before using any AI systems or applications. To mitigate any "agenda" bias and censorship issues brought by any LLM, we could consider adopting decentralized AI, preferably in the form of decentralized autonomous organizations (DAOs). AI knows no borders, and perhaps now is the time to consider formulating unified global AI regulations.
Author of the viewpoint: Dr. Merav Ozair
Related topics: How decentralized finance (DeFi) can achieve secure scalable development in the era of artificial intelligence (AI)
This article is for general informational purposes only and should not be considered as legal or investment advice. The views, thoughts, and opinions expressed in this article are solely those of the author and do not necessarily reflect or represent the views and positions of Cointelegraph.