China has launched its own ChatGPT: DeepSeek, the artificial intelligence chatbot that challenges OpenAI and has shaken the world, and not only from a technological point of view. In the United States, Big Tech stocks plummeted. And criticism followed: DeepSeek has been accused of having "stolen" from ChatGPT and of not answering questions about China, and in Italy a block has already arrived from the Privacy Guarantor.
But what does all this mean? Let's find out in this article, where we analyze the technological, economic, and geopolitical aspects of DeepSeek.
DeepSeek-R1: the reasoning model capable of imitating human reasoning
What is DeepSeek, or more precisely DeepSeek-R1? It is the new Chinese artificial intelligence chatbot that has managed to reach the level of the more famous ChatGPT o1 from the American company OpenAI. Both models, in fact, do something very similar to our reasoning: the so-called chain of thought. In short, before answering a question, they break it down into smaller sub-problems and work through a series of intermediate steps before responding.
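To give a concrete idea of what chain of thought looks like, here is a minimal, purely illustrative Python sketch. The question and the steps are invented for this article; real reasoning models generate these intermediate steps themselves, and this is not the actual internal format of o1 or DeepSeek-R1.

```python
# Purely illustrative: a direct answer vs. a chain-of-thought answer.
# The question and steps are invented; reasoning models produce such
# intermediate steps on their own before the final answer.

question = "A train travels 120 km in 1.5 hours. What is its average speed?"

# Without chain of thought: the model jumps straight to an answer.
direct_answer = "80 km/h"

# With chain of thought: the problem is broken into smaller steps.
cot_answer = (
    "1. The distance is 120 km.\n"
    "2. The time is 1.5 hours.\n"
    "3. Speed = distance / time = 120 / 1.5 = 80.\n"
    "Answer: 80 km/h"
)

print(direct_answer)
print(cot_answer)
```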
But DeepSeek does it with one great advantage: it costs much less. And this has had a huge impact worldwide, both technologically and economically.
But why does this technology cost less? Both ChatGPT and DeepSeek are LLMs, that is, Large Language Models. They are artificial intelligences capable of responding in our language. But while we reason with a brain trained over years of study and experience, LLMs are trained by feeding enormous amounts of data into mathematical models. And to teach these artificial intelligences to answer us, a huge number of calculations are performed by supercomputers, which use graphics cards, called GPUs: electronic circuits, chips, capable of processing billions of operations per second.
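To make the idea of "training through many data" less abstract, here is a toy PyTorch sketch of next-token prediction, the basic mechanism by which LLMs learn. All the sizes are made up and tiny; real models repeat this kind of update over trillions of tokens on thousands of GPUs.

```python
# A toy sketch of what "training on lots of data" means for an LLM:
# the model predicts the next token of a text, and its parameters are
# nudged to reduce the prediction error. Sizes here are minuscule.

import torch
import torch.nn as nn

vocab_size, embed_dim = 1000, 64

model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),   # token -> vector
    nn.Linear(embed_dim, vocab_size),      # vector -> next-token scores
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (32,))   # stand-in for real text
inputs, targets = tokens[:-1], tokens[1:]      # predict each next token

for step in range(100):
    logits = model(inputs)
    loss = loss_fn(logits, targets)   # how wrong were the predictions?
    optimizer.zero_grad()
    loss.backward()                   # these are the billions of
    optimizer.step()                  # calculations GPUs are built for
```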
Why DeepSeek is revolutionary: how the Chinese model has reduced costs
While training the latest version of ChatGPT, o1, is estimated to have required about 30,000 GPUs, for DeepSeek-R1, its Chinese counterpart, it has been declared that just over 2,000 were enough: roughly one fifteenth. And not only that: ChatGPT was trained with more powerful graphics cards than those used to train DeepSeek.
And thanks to this reduction in computational costs, the Chinese company that financed the project, High-Flyer, said it spent "only" 5-6 million dollars to train DeepSeek-R1, compared to the 100 million spent on ChatGPT o1.
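A quick back-of-envelope check of the figures cited above; all of them are public estimates or company declarations, not audited numbers.

```python
# Figures as reported in the press and in company statements.

chatgpt_o1_gpus = 30_000          # estimated GPUs for ChatGPT o1
deepseek_r1_gpus = 2_000          # roughly what DeepSeek declared

print(chatgpt_o1_gpus / deepseek_r1_gpus)   # 15.0 -> "one fifteenth"

chatgpt_o1_cost = 100_000_000     # reported training cost, USD
deepseek_r1_cost = 6_000_000      # declared training cost, USD

print(chatgpt_o1_cost / deepseek_r1_cost)   # ~16.7x cheaper
```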
But how is it possible that DeepSeek-R1 achieves the same results as ChatGPT o1 while using far fewer, and less powerful, GPUs? Because Liang Wenfeng, founder of DeepSeek, and his researchers devised a new, revolutionary mathematical approach that needs much less computing power and is opening new doors for research in artificial intelligence.
And the most impressive thing is that this revolutionary model was born out of necessity. In 2022, then US President Biden barred the American company Nvidia, which produces graphics cards, from exporting its most powerful GPUs to China, citing national security. This technology is in fact heavily used in the military sphere, and the United States, by hindering China's technological development, wanted to protect itself in advance from a military threat.
But Wenfeng had, fortunately for him, already bought a few thousand H800s, fairly powerful Nvidia cards, back in 2021. And those were all he had available. It is for this reason that he had to get inventive: let's say he had a weaker computer and had to make it suffice.
And how did he do it? Let's try to understand it in a nutshell.
The new mathematical model: the massive use of reinforcement learning
The model used to train ChatGPT relies heavily on a methodology called supervised fine-tuning, or SFT, which basically works like this. It starts from a very vast library, that is, a series of texts and official sources from which the model learns to speak. This library is first processed and divided into labeled examples, so that the model learns, more or less, which answer corresponds to which question. This is the main part, which is then refined through so-called reinforcement learning, which in ChatGPT's training is carried out by human beings. That is, the answers obtained through SFT are evaluated: if they are good, the human reviewer gives them a high score; if they are imprecise, a low one. Through these scores, the model slowly settles on the answers that maximize the score. And this approach works very well.
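To fix the idea, here is a highly simplified sketch of the scoring step just described. The human_score function is a stand-in for a human rater and is invented for this article; it is not OpenAI's actual training code.

```python
# A highly simplified sketch of reinforcement-style scoring:
# human ratings steer the model toward higher-scoring answers.

def human_score(answer: str) -> float:
    """Pretend human rater: 1.0 for a good answer, 0.0 otherwise."""
    return 1.0 if "80 km/h" in answer else 0.0

# Two candidate answers produced after supervised fine-tuning.
candidates = [
    "The average speed is 80 km/h.",
    "The train is quite fast.",
]

# In real RLHF these scores become the reward signal that updates
# the model's parameters, pushing it toward high-scoring answers.
ranked = sorted(candidates, key=human_score, reverse=True)
print(ranked[0])   # the kind of answer the model is nudged toward
```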

DeepSeek's researchers, however, wondered what would happen if the model were based primarily on reinforcement learning. So DeepSeek's training does not start from supervised data like ChatGPT's, but goes straight to scoring the answers so as to steer them, little by little, toward the right one, through rewards that evaluate how correct an answer is and how useful it has been. And it does so by evaluating several answers at the same time, which are compared with one another, as in the sketch below. Supervised fine-tuning is still used in this model as well, but only to polish the answers, thus reducing the computational cost.
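Here is a toy version of the "several answers compared with one another" idea. In DeepSeek's published work this is formalized as Group Relative Policy Optimization (GRPO); the rewards below are invented numbers, and this shows only the advantage computation, not the full training loop.

```python
# A group of answers to the same question is scored, and each answer
# is judged relative to the group rather than by an extra learned
# value model, which saves computation. Rewards are made-up numbers.

rewards = [0.2, 0.9, 0.5, 0.1]   # scores for 4 answers to one question

mean = sum(rewards) / len(rewards)
std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5

# Each answer's "advantage" is its reward relative to the group:
# above-average answers get reinforced, below-average ones discouraged.
advantages = [(r - mean) / std for r in rewards]
print(advantages)
```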
This change of perspective has allowed DeepSeek to be much lighter. While ChatGPT o1 is estimated to have about a trillion parameters, DeepSeek-R1 has "only" 671 billion, and moreover it does not use them all at once: for each question, only the parameters that are actually needed are activated. This method is called Mixture of Experts.
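Here is a minimal mixture-of-experts routing sketch in PyTorch: a small "gate" picks the top-k experts for each input, so only a fraction of all parameters does any work for a given question. The sizes are toy values, not DeepSeek-R1's real configuration.

```python
# Minimal mixture-of-experts routing: only the experts chosen by the
# gate actually run, so the active parameter count stays far below
# the total. Toy sizes, for illustration only.

import torch
import torch.nn as nn

num_experts, top_k, dim = 8, 2, 16

experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
gate = nn.Linear(dim, num_experts)

x = torch.randn(1, dim)                        # one token's features
scores = gate(x).softmax(dim=-1)               # relevance of each expert
weights, picked = scores.topk(top_k, dim=-1)   # keep only the best k

# Only the two chosen experts run; the other six stay idle.
output = sum(
    w * experts[i](x)
    for w, i in zip(weights[0].tolist(), picked[0].tolist())
)
print(output.shape)   # torch.Size([1, 16])
```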
All this gave a jolt to the United States economy, and especially to Nvidia, which lost some 600 billion dollars in market value. But why?
The impact of DeepSeek on the world economy
As we said, ChatGPT, but also other LLMs like Copilot, needed thousands upon thousands of GPUs for their training. Then DeepSeek-R1 arrived and showed that far fewer were enough, which led to a sharp devaluation of the GPU market, or rather of its main manufacturer, Nvidia.
DeepSeek's creators have also declared that only 6 million dollars were enough to create the model. And this gave a jolt to American Big Tech companies like OpenAI, Google, Microsoft and Meta, because it showed that even smaller companies can afford to build such a model. Moreover, DeepSeek-R1 is free, while ChatGPT o1 was paid, or rather, it was until a few hours ago! In fact, OpenAI, which had put its most sophisticated version behind a subscription, has now made it free to stay competitive with DeepSeek.

But the most important thing is that DeepSeek is open source, which means the code used to build and train the model is public: anyone can consult it and use it to train a new LLM. And in fact, just a few days after the release of R1, there are already dozens of new AI chatbots.
What we are talking about is not just an economic jolt, but a geopolitical one too. Just a few days after the inauguration of the new president Trump, who is pushing hard on the development of artificial intelligence, China managed to demonstrate that it can match the greatest Western power. And it also demonstrated that it can get around the limits imposed by the United States, such as the block on GPU exports. Many call this a "Sputnik moment", comparing this Chinese success to 1957, when the Soviet Union sent the first satellite into orbit, shattering the idea of American technological superiority.
But there is a series of considerations to be made.
DeepSeek's gray areas: from the alleged theft of information to privacy problems
First of all, DeepSeek used far less computing power to be trained, but every time we query it, calculations are still performed. So the problem of the export restrictions on graphics cards continues to weigh on China.
Then there is the issue of privacy, that is, what happens to the data we enter into the chatbot. And here we come to Italy, where DeepSeek was blocked by the Italian privacy authority, the GPDP. Careful, this does not mean it no longer works: the chatbot keeps running, but the collection of the data we enter has been blocked. This is because the data collected by DeepSeek is stored on servers located on Chinese soil, and this goes against the GPDP's rules on the protection of our data.
Then there is the issue of political answers: DeepSeek cannot answer a series of questions, namely those that go against the party line. If, for example, you ask the chatbot what happened in 1989 in Tiananmen Square in Beijing, where a student protest against the regime ended in a massacre, DeepSeek's web chatbot freezes and simply says it does not answer such questions. But this should not surprise us: being a Chinese technology, the public chatbot follows rules that forbid it from saying things against the regime; we cannot expect the same freedom of the press we are used to.
Finally, there is the big issue of alleged lies and "theft". Many believe that the declared 6 million in spending is too little, and that the company therefore lied about the figures, perhaps precisely to deliver the jolt to the world economy we talked about. The same goes for the GPUs: some say DeepSeek actually possesses many more, and more powerful, GPUs.
Then there is the suspicion that DeepSeek "robbed" ChatGPT, that is, that it was trained on ChatGPT's responses. But ChatGPT itself has for years been accused of stealing from newspapers such as the New York Times and from video platforms like YouTube. In short, the truth is not yet known, and may never be: it is true that OpenAI accuses DeepSeek of "stealing", but it faces the very same accusation. A dog chasing its own tail.