The Manhattan Project Was Secret. Should America’s AI Work Be Too?

DeepSeek is supercharging the debate over how much companies should share their AI knowledge

Christopher Mims, Feb. 1, 2025 at 12:02 am

Tim Dettmers is one of the scientists at the cutting edge of artificial intelligence who contributed to the DeepSeek breakthrough that grabbed the world’s attention this past week.

Dettmers, a researcher at Seattle's Allen Institute for Artificial Intelligence who previously worked for Meta Platforms, pioneered a way to train and run AI models on less powerful hardware. He published his work in 2021. When the DeepSeek team recently published its own papers on how it had built its models, he discovered his paper among their citations. It turns out they were eager readers of his work.

The AI research community has a culture of publishing scientific papers that explain new breakthroughs in detail, and of making models available for anyone to use. That’s the approach that Meta has adopted under Mark Zuckerberg, and that DeepSeek is using. It’s also the ethos driving buzzy French AI startup Mistral and many other cutting-edge AI companies and research institutions.

With the unveiling of DeepSeek’s latest AI models, we are seeing how the sharing of all that knowledge allowed its team—who by their own account leveraged techniques developed by engineers spread across the world—to leapfrog much better-resourced AI teams in the U.S. It is supercharging a high-stakes debate about how much Americans should share about their AI breakthroughs. 

“It feels good,” Dettmers said when I asked him how it felt to have contributed to what some are calling a “Sputnik moment” in AI. But, he added, the best part wasn’t seeing his work—published while he was at the University of Washington—implemented. It was the possibility that because the DeepSeek team had also published a detailed paper on how they used his innovation, he and others could in turn build on their work, and create an even better model.

Artificial-intelligence powerhouse OpenAI has been in some ways a notable exception to this culture of sharing, and there are accusations that perhaps DeepSeek achieved its big leap forward in part by “distilling” OpenAI’s models. Distillation is the extraction of a model’s knowledge by training another model on its outputs, and it can be used in lieu of, or to supplement, traditional training methods. But don’t let that distract from the debate about academia-style sharing of knowledge, the creation of models like DeepSeek’s that are free to download and use, and the publication of open-source code for building them. These are the matters that will determine the winners and losers in the AI race far into the future.
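The core of classical knowledge distillation is simple: a "student" model is trained to match the softened output probabilities of a "teacher" model, rather than (or in addition to) hard labels. Below is a minimal NumPy sketch of the distillation loss only; the function names, temperature value, and example logits are illustrative, and any real pipeline (including whatever DeepSeek may or may not have done) would involve a full training loop over many examples.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T softens the distribution,
    # exposing more of the teacher's "dark knowledge" about non-top classes.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # KL divergence between the teacher's softened output distribution
    # and the student's; training the student minimizes this quantity.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([4.0, 1.0, 0.5])  # logits from a large, capable model
student = np.array([2.0, 1.5, 1.0])  # logits from a smaller model in training
loss = distillation_loss(teacher, student)
```

The loss is zero only when the student reproduces the teacher's distribution exactly, which is what makes a widely accessible model's outputs such a rich training signal.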

Investors like Marc Andreessen, deans of the field of AI research like Yann LeCun, and many others who fund and build AI argue that sharing will ultimately benefit all of humanity. The opposing camp includes people—most notably investor Vinod Khosla—who say doing so poses a risk to national security.

For them, having the best AI is like having the best engines or automobiles—things that in the past have determined which countries were the wealthiest, and could dictate terms of trade to others. And when it comes to weapons, Hollywood movies and science-fiction novels have been priming us for decades to understand the promise and peril of having an artificial superintelligence on our side. Some AI-startup founders say this fiction is on the cusp of becoming reality.

Khosla, founder of venture-capital firm Khosla Ventures and the first outside investor in OpenAI, has compared the open-source approach to AI with sharing the details of the Manhattan Project.

Dario Amodei, chief executive of OpenAI competitor Anthropic, wrote soon after the release of DeepSeek that it strengthens the case for export controls on advanced AI chips. Those controls started under the Biden administration to curb Chinese AI development by barring export of certain types of advanced chips, and they’ve been tightened over time.

It’s fair to say that the majority of engineers who build AI disagree with the idea that AI development should be kept secret.

“The only reason that the U.S. has been the center of innovation for AI is because we have embraced for decades an ethos of open publishing,” says AI investor Anjney Midha, a general partner at famed venture-capital firm Andreessen Horowitz. 

Case in point: The new kind of AI that has enabled the current boom, the transformer, was invented at Google in 2017. But soon after, when engineers at the company tried to apply that technique to language, the result was a paper concluding that massive language models probably weren’t the way to go. Because they had published their results, engineers at OpenAI came to the opposite conclusion, and the result was GPT-3, the breakthrough that led to ChatGPT and touched off the latest wave of AI capabilities and investment, says Midha.

But OpenAI hasn’t released its models openly, in a way that allows anyone else to run them. Having raised billions of dollars from investors, the company has a business model where it charges for access to those models. Yet at the end of what has been a wild week in the world of artificial intelligence, OpenAI CEO Sam Altman suggested that could change. In an “ask-me-anything” session on Reddit Friday, a participant asked Altman if the ChatGPT maker would consider releasing some of the technology within its AI models and publish more research showing how its systems work. Altman said OpenAI employees were discussing the possibility.

“i personally think we have been on the wrong side of history here and need to figure out a different open source strategy,” Altman responded. Still, he said, “not everyone at openai shares this view, and it’s also not our current highest priority.”

In the past, the performance of free-to-use models like the one DeepSeek released—even those from Meta—hasn’t been as good as those offered by companies that keep their models and innovations to themselves, such as OpenAI.

By closing the performance gap with leading AI models, while purportedly using far fewer resources, the company behind DeepSeek has opened the door to a future in which far more organizations and nations will be able to build cutting-edge models, says Dettmers of the Allen Institute.

DeepSeek hasn’t described in detail how it intends to profit from AI, but CEO Liang Wenfeng has made it clear that’s not his priority at the moment. He believes it’s more important to establish a strong ecosystem first—because if a company releases its software code, it will attract more users, who can then suggest improvements to the code. Liang has said that keeping software proprietary isn’t as critical for maintaining a competitive advantage as many believe.

“The moat formed by closed source is short-lived in the face of disruptive technologies,” he told Chinese tech publication 36Kr last year. “Even if OpenAI is closed-source, it won’t stop others from catching up to it.” 

On the morning I caught him for an interview, Thomas Sohmers, chief technology officer and founder of AI hardware startup Positron, was working to get the latest DeepSeek models running on his company’s custom AI computers. For founders like Sohmers, the triumph of open-source, general-purpose AI models is far from a foregone conclusion.

Sohmers believes that the rise of free-to-use models like DeepSeek could have a paradoxical effect on the AI industry: It could lead to more proprietary models that are neither open source nor free to use. Because DeepSeek published the techniques it used, more people can plausibly build AI models that are cheaper to train and to run, using their own proprietary data, and which are specialized for different industries.

Take Sohmers as an example: His company needs the help of AI to design new microchips for the computers it sells. Cheaper-to-train AI models could allow his team to create better ways of designing microchips. And he wouldn’t have to share what his AI creates.

Midha says he believes that, essentially, the AI genie is out of the bottle. Attempts to keep U.S. AI research secret would only make U.S. companies and AI labs less competitive, by cutting them off from the global exchange of knowledge happening at a feverish pace inside of China and all over the world.

“AI has become infrastructure for most modern countries,” he adds. “I think that if we ban it, the only thing we do is ensure that other countries who need an allied partner will go to the Chinese Communist Party, or whoever is providing them the best open models.”

—Raffaele Huang contributed to this article.

Excerpts: The Wall Street Journal
