Was Monday’s Tech Wreck Warranted or are Predictions of Nvidia’s Doom Premature?

DeepSeek

Silicon Valley venture capitalist Marc Andreessen called the debut of DeepSeek’s R1 AI model “AI’s Sputnik moment,” a reference to the Soviet Union’s launch of the first satellite into orbit in 1957. The realization that the Soviets had beaten America into space intensified fears that the country had fallen behind in technology. This seminal event triggered the Space Race.

Likewise, the introduction of DeepSeek’s new model seems to have sparked a similar reaction in the AI industry.

Reports that DeepSeek’s newest models, V3 and R1, were on par with OpenAI’s ChatGPT, and developed for a fraction of the cost, sent AI chip maker (and previous market darling) Nvidia and other AI stocks into a tailspin on Monday. By the end of the trading day, Nvidia, the hardest hit, had lost nearly $600 billion – or 17% – of its market value.

CNBC reported that despite the day’s carnage, Nvidia itself called DeepSeek’s R1 model “an excellent AI advancement.”

DeepSeek, a Chinese AI startup, was founded in 2023 by hedge fund trader Liang Wenfeng. The company “released its open-source model for download in the United States in early January, where it has since surged to the top of the iPhone download charts, surpassing the app for OpenAI’s ChatGPT,” according to Forbes.

Many experts in the field have declared that DeepSeek’s R1 is equal to ChatGPT, and the company claims that it cost just $5.6 million to train. In contrast, U.S. tech companies have spent hundreds of millions of dollars training their models.


So, what does the introduction of R1 mean for Nvidia, which was, until Monday, the most magnificent of the stock market’s “Magnificent Seven” companies? Was the hysteria warranted, or are predictions of Nvidia’s doom premature?

Fortune’s technology reporter Jeremy Kahn provided a lengthy explanation for why he believes “DeepSeek’s impact could, counterintuitively, increase demand for advanced AI chips.” (Although the article is behind a paywall, it was published on Yahoo Finance.)

The reason is partly due to a phenomenon known as the Jevons Paradox, named for the 19th-century British economist William Stanley Jevons, who noticed that when technological progress made the use of a resource more efficient, overall consumption of that resource tended to increase. This makes sense if the demand for something is relatively elastic: the falling price due to the efficiency improvement creates even greater demand for the product.

The Jevons Paradox could well come into play here. One of the things that has slowed AI adoption within big organizations so far has been how expensive these models are to run. This has made it hard for businesses to find use cases that can earn a positive return on investment, or ROI. This has been particularly true so far for the new “reasoning” models like OpenAI’s o1. But DeepSeek’s models, especially its o1 competitor R1, are so inexpensive to run that companies can now afford to insert them into many more processes and deploy them for many more use cases. Taken across the economy, this may cause overall demand for computing power to skyrocket, even as each individual computation requires far less power.
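The elasticity argument above can be made concrete with a toy calculation. This is a minimal sketch with invented numbers, assuming a constant-elasticity demand curve; it is not a model of the actual AI market, only an illustration of why cheaper computation can mean more total computation.

```python
# Toy illustration of the Jevons Paradox (all numbers invented):
# demand follows a constant-elasticity curve, queries = k * price**(-elasticity).
# When elasticity > 1, cutting the price per AI query raises aggregate
# compute consumption even though each query uses far less compute.

def total_compute(price_per_query, compute_per_query, elasticity=1.5, k=1_000_000):
    """Return total compute consumed under a constant-elasticity demand curve."""
    queries = k * price_per_query ** (-elasticity)  # cheaper queries -> many more queries
    return queries * compute_per_query

# Before: expensive queries, each using a lot of compute.
before = total_compute(price_per_query=1.00, compute_per_query=10.0)

# After: a 10x efficiency gain -- each query is 10x cheaper and uses 1/10 the compute.
after = total_compute(price_per_query=0.10, compute_per_query=1.0)

print(after > before)  # aggregate compute demand grows despite the efficiency gain
```

With an elasticity below 1, the same calculation would show total compute falling, which is exactly the disagreement between Monday’s sellers and commentators like Gelsinger.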

He cites a social media post from former Intel CEO Pat Gelsinger who wrote that “computing obeys the gas law. Making it dramatically cheaper will expand the market for it…this will make AI much more broadly deployed. The markets are getting it wrong.”

Kahn continues:

Another reason that demand for advanced computer chips is likely to continue to grow has to do with the way reasoning models like R1 work. Whereas previous kinds of LLMs became more capable if they used more computer power during training, these reasoning models use what is called “test time compute”—they provide better answers the more computing power they use during inference. So while one might be able to run R1 on a laptop and get it to output a good answer to a tough math question after, say, an hour, giving the same model access to GPUs or AI chips in the cloud might allow it to produce the same answer in seconds. For many business applications of AI, latency, or the time it takes a model to produce an output, matters. The less time, generally the better. And to get that time down with reasoning models still requires advanced computing chips.

Finally, Kahn believes “it’s entirely possible DeepSeek has been less than truthful about how many top-flight Nvidia chips it has access to and used to train its models.”

Many AI researchers doubt DeepSeek’s claims about having trained its V3 model on about 2,000 of Nvidia’s less capable H800 computer chips, or that its R1 model was trained on so few chips. Alexandr Wang, the CEO of AI company Scale AI, said in a CNBC interview from Davos last week that he has information that DeepSeek secretly acquired access to a pool of 50,000 Nvidia H100 GPUs (its latest model). It is known that High-Flyer, the hedge fund that owns DeepSeek, had amassed a substantial number of less capable Nvidia GPUs prior to export controls being imposed. If this is true, it is quite possible that Nvidia is in a better position than investor panic would suggest—and that the problem with U.S. export controls is not the policy, but its implementation.

At 12 p.m. ET on Tuesday, the price of NVDA stock was holding onto a 5-point gain on volume of 281 million shares, already surpassing its 30-day average daily trading volume of 209 million shares.

The news from DeepSeek on Monday came as a lightning bolt. Investors panicked and overreacted as they always do. The stock market is driven by fear and greed, and at least for the moment, they look to be about equal.

Prior to Monday’s tech wreck, shares of Nvidia had been priced to perfection. In other words, it was poised for a correction. All it needed was a little push. And DeepSeek gave it a shove.

A warning:

Before downloading the app, be aware that, like TikTok and other Chinese apps, China will be collecting your data including “your IP, keystroke patterns, device info … and stor[ing] it in China, where all that data is vulnerable to arbitrary requisition from the [Chinese] State.”

Apparently, your keystroke patterns (how fast you type, how long you press each key, and how much pressure you apply) are as unique as your fingerprints and are now being used as a tool to identify individuals. This method is “a form of behavioral biometrics” called “keystroke dynamics.”
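To make concrete what “keystroke dynamics” measures, here is a minimal sketch of the two timing features such systems commonly extract: dwell time (how long each key is held) and flight time (the pause between keys). The function name and the sample timestamps are invented for illustration; real systems collect these events from keyboard hardware or apps.

```python
# Hypothetical sketch of keystroke-dynamics feature extraction.
# Each event is (key, press_time_ms, release_time_ms), in typing order.

def keystroke_features(events):
    """Return (dwell_times, flight_times) for a sequence of key events.

    Dwell time: how long each key was held down.
    Flight time: gap between releasing one key and pressing the next.
    """
    dwell = [release - press for _, press, release in events]
    flight = [
        events[i + 1][1] - events[i][2]  # next key's press minus this key's release
        for i in range(len(events) - 1)
    ]
    return dwell, flight

# Invented sample: typing "cat", timestamps in milliseconds.
sample = [("c", 0, 95), ("a", 140, 230), ("t", 300, 410)]
dwell, flight = keystroke_features(sample)
print(dwell)   # [95, 90, 110] -- hold duration per key
print(flight)  # [45, 70] -- pause between consecutive keys
```

A profile built from thousands of such measurements is what makes the pattern distinctive enough to identify a typist, which is why collection of raw keystroke timing is a privacy concern.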


Elizabeth writes commentary for The Washington Examiner. She is an academy fellow at The Heritage Foundation and a member of the Editorial Board at The Sixteenth Council, a London think tank. Please follow Elizabeth on X or LinkedIn.

Tags: Artificial Intelligence (AI), China, Technology
