Nvidia is worth $2.6 trillion — larger than two Metas, three Berkshire Hathaways, or five ExxonMobils. Goldman Sachs called it “the most important stock on planet Earth” for its centrality in the booming AI industry, and the company will likely be worth more than Apple within a few months. As of Friday, its CEO, Jensen Huang, is the 17th-richest person in the world, with an estimated $91 billion to his name, according to Bloomberg. That’s more than double what he was worth on Christmas. At this rate, Huang — whose public image is far from the flamboyant edgelord tech bro so common in Silicon Valley’s C-suites — could become richer than Elon Musk by 2025, and his company could become more valuable than any other in the world. (Or, of course, the stock could stop going straight up, as it did a few weeks ago, since many people seem to think it’s gotten way overvalued.)
AI, as a technology, is still pretty uneven. OpenAI’s most advanced public model can pick stocks better than human analysts, while Google’s new AI-powered chatbot, Gemini, thinks you should eat rocks. (You should not eat rocks.) Huang, though, doesn’t much care about that, at least as far as his own personal fortune is concerned. AI software requires a huge amount of processing power regardless of how right or wrong the program’s answers may be, and Huang’s company more or less has the market cornered on the kinds of computer chips that can handle that workload. Even the stupidest AI is going to need a lot of Nvidia’s chips, called graphics-processing units.
Since there is such a fervent belief among the Silicon Valley set that AI will one day achieve superhuman intelligence, there’s a tremendous incentive for just about every tech company to make that technology a core part of its operations. Huang’s business, though, is today’s equivalent of selling shovels during a gold rush. Many, if not most, of the companies vying to be the next big thing in AI will go bust — and Nvidia will have long pocketed their money.
by Kevin T. Dugan, Intelligencer | Read more:
Image: David Paul Morris/Bloomberg via Getty Images
[ed. See also: Jensen Huang’s Homes: Inside the Nvidia CEO’s Property Portfolio (Mansion Global). (Mansion Global?) Also, from one of the article’s links, Financial Statement Analysis with Large Language Models (pdf):]
Abstract: We investigate whether an LLM can successfully perform financial statement analysis in a way similar to a professional human analyst. We provide standardized and anonymous financial statements to GPT-4 and instruct the model to analyze them to determine the direction of future earnings. Even without any narrative or industry-specific information, the LLM outperforms financial analysts in its ability to predict earnings changes. The LLM exhibits a relative advantage over human analysts in situations when the analysts tend to struggle. Furthermore, we find that the prediction accuracy of the LLM is on par with the performance of a narrowly trained state-of-the-art ML model. LLM prediction does not stem from its training memory. Instead, we find that the LLM generates useful narrative insights about a company’s future performance. Lastly, our trading strategies based on GPT’s predictions yield a higher Sharpe ratio and alphas than strategies based on other models. Taken together, our results suggest that LLMs may take a central role in decision-making. (...)
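[ed. To make the setup concrete, here is a minimal, hypothetical sketch of the general idea described above: hand a standardized, anonymized financial statement to a general-purpose LLM and ask it for a directional earnings call. The prompt wording, model name, and helper function are illustrative assumptions, not the authors' actual protocol; the sketch assumes the openai Python SDK.]

```python
# Illustrative sketch only -- not the paper's actual prompt or pipeline.
# Assumes the openai Python SDK (>= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def predict_earnings_direction(statement_text: str) -> str:
    """Ask a general-purpose LLM whether next-period earnings will rise or fall,
    given only standardized, anonymized financial statement data (no company
    names, no dates, no narrative disclosures)."""
    prompt = (
        "You are a financial analyst. Below is a standardized, anonymized "
        "balance sheet and income statement. Analyze the trends and financial "
        "ratios, then answer with exactly one word, INCREASE or DECREASE, "
        "for the direction of next year's earnings.\n\n"
        f"{statement_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4",  # model choice is an assumption for illustration
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for a classification-style task
    )
    return response.choices[0].message.content.strip()
```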
Conclusions: Our results suggest that GPT’s analysis yields useful insights about the company, which enable the model to outperform professional human analysts in predicting the direction of future earnings. We also document that GPT and human analysts are complementary, rather than substitutes. Specifically, language models have a larger advantage over human analysts when analysts are expected to exhibit bias and disagreement, suggesting that AI models can assist humans better when they are under-performing. Humans, on the other hand, add value when additional context, not available to the model, is likely to be important.
Furthermore and surprisingly, GPT’s performance is on par (or even better in some cases) with that of the most sophisticated narrowly specialized machine learning models, namely, an ANN trained on earnings prediction tasks. We investigate potential sources of the LLM’s superior predictive power. We first rule out that the model’s performance stems from its memory. Instead, our analysis suggests that the model draws its inference by gleaning useful insights from its analysis of trends and financial ratios and by leveraging its theoretical understanding and economic reasoning. Notably, the narrative financial statement analysis generated by the language model has substantial informational value in its own right. Building on these findings, we also present a profitable trading strategy based on GPT’s predictions. The strategy yields higher Sharpe ratios and alphas than other trading strategies based on ML models. Overall, our analysis suggests that GPT shows a remarkable aptitude for financial statement analysis and achieves state-of-the-art performance without any specialized training.
Although one must interpret our results with caution, we provide evidence consistent with large language models having human-like capabilities in the financial domain. General-purpose language models successfully perform a task that typically requires human expertise and judgment and do so based on data exclusively from the numeric domain. Therefore, our findings indicate the potential for LLMs to democratize financial information processing and should be of interest to investors and regulators. For example, our results suggest that generative AI is not merely a tool that can assist investors (e.g., in summarizing financial statements, Kim et al., 2023b), but can play a more active role in making informed decisions. This finding is significant, as unsophisticated investors might be prone to ignoring relevant signals (e.g., Blankespoor et al., 2019), even if they are generated by advanced AI tools. However, whether AI can substantially improve human decision-making in financial markets in practice is still to be seen. We leave this question for future research. Finally, even though we strive to understand the sources of model predictions, it is empirically difficult to pinpoint how and why the model performs well.
Authors: Kim, Alex G. and Muhn, Maximilian and Nikolaev, Valeri V., Financial Statement Analysis with Large Language Models (May 20, 2024). Chicago Booth Research Paper Forthcoming, Fama-Miller Working Paper, Available at SSRN: https://ssrn.com/abstract=4835311 or http://dx.doi.org/10.2139/ssrn.4835311
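[ed. The Sharpe ratios and alphas the authors report measure the risk-adjusted returns of a long-short portfolio built from the model's directional predictions. Here is a hedged sketch of how such an evaluation might look; the column names, equal weighting, and monthly annualization convention are assumptions for illustration, not the paper's exact methodology.]

```python
# Illustrative evaluation of a long-short strategy built from directional
# earnings predictions; the data layout and conventions are assumptions.
import numpy as np
import pandas as pd

def long_short_returns(df: pd.DataFrame) -> pd.Series:
    """df holds one row per stock-month with columns 'month', 'prediction'
    (either 'INCREASE' or 'DECREASE'), and 'ret' (the next month's return).
    Go long predicted increases, short predicted decreases, equal-weighted."""
    sign = np.where(df["prediction"] == "INCREASE", 1.0, -1.0)
    return (df["ret"] * sign).groupby(df["month"]).mean()

def sharpe_ratio(monthly_ret: pd.Series, rf: float = 0.0) -> float:
    """Annualized Sharpe ratio: mean excess return divided by its standard
    deviation, scaled by sqrt(12) for monthly observations."""
    excess = monthly_ret - rf
    return float(excess.mean() / excess.std() * np.sqrt(12))
```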