In this interview, Chinese AI expert Mu-ming Poo argues that DeepSeek’s accomplishments are an example of China’s historic excellence in putting new technologies to wide practical use. Poo contends that China should create academic scholarship programs modeled after DeepSeek’s company culture of small teams of young, ambitious researchers working largely without supervision. He also predicts that LLM- and brain-inspired approaches to AI will converge in the next five years as the humanoid robot industry takes off.
An archived version of the Chinese source text is available online at: https://perma.cc/T25Q-JFD4
Mu-ming Poo: The Scientists Most Needed Today Are Those Who Are “Driven to Catch Big Fish,” Not Those Who Only Care About “Fishing for the Joy of It”
The five-year plan for the second phase of the 2030 Major S&T Project “Brain Science and Brain-Inspired Research” (the “China Brain Project”) is currently being formulated. Large language models (LLMs) seem to be more powerful than the human brain in many respects. Do we still need to study the brain? What insights has the DeepSeek “frenzy” given the Chinese scientific community? This Liberation Daily reporter sat down for an exclusive interview with an important leader in the field of brain science in China—Academician Mu-ming Poo (Pu Muming; 蒲慕明), Academic Director of the Center for Excellence in Brain Science and Intelligence Technology (CEBSIT) of the Chinese Academy of Sciences (CAS).
[“Going from 1 to 100” at an unprecedented speed]
Liberation Daily: What is your evaluation of DeepSeek’s “emergence out of nowhere”?
Mu-ming Poo: 2025 will go down as one of the most memorable years for Chinese science in recent decades. DeepSeek’s newly released reasoning artificial intelligence (AI) model, R1, is comparable in performance to the strongest model in the field, OpenAI-o1. It cleverly combines a mixture-of-experts architecture, reinforcement learning, and distillation methods to achieve unexpectedly high inference efficiency, and for the first time it shows users the model’s reasoning “chain of thought.” Surprisingly, the cost of pre-training the model was far lower than that of today’s top LLMs.
Of course, DeepSeek-R1 is not a groundbreaking new technology like the backpropagation algorithm or the Transformer architecture, which triggered the two AI revolutions of deep learning and LLMs, respectively. The backpropagation algorithm was also recognized with the 2024 Nobel Prize in Physics. What I would like to emphasize, however, is that although DeepSeek-R1 is not an original breakthrough “from 0 to 1,” it is AI development “from 1 to 100” at an unprecedented speed. It clearly shows that development “from 1 to 100” may have a more significant impact than development “from 0 to 1.”