Generative AI: A few bad guesses
This article, by Te Ahumairangi Equity Analyst Prithvi Sharma,
originally appeared in the NBR on 30 May 2023.
In a year in which there has been no shortage of newsworthy happenings to make a headline, it feels a little uncanny that a computer-science model has stolen the front page. Day after day, it seems, OpenAI’s ‘ChatGPT’ finds its way into our conversations, both human and artificial.
The interest speaks to the enormous and seemingly daunting potential of Generative Artificial Intelligence to shape society in ways that are not yet obvious. This uncertainty hasn’t, however, deterred sweeping assertions about the future impacts of Large Language Models (LLMs), the form of model to which ChatGPT belongs. ‘Half of all jobs automated by 2030’ and ‘a 10% boost to GDP’ are a couple of favourites I have come across. These sorts of proclamations seem to do more to add to the collective anxiety this new(-ish) technology has provoked than to help us make sense of an uncertain future.
Let me try, then, to make some guesses that are a little bit more tangible, and in doing so hopefully shed some light on Generative AI and its possible effects.
But first, a caveat: predicting the impact of transformative technology is challenging, and what we can probably be most confident of is that these guesses will be wrong. The numerous errors that commentators and equity markets made in predicting the winners and losers from the internet are evidence of this. For example:
- Many of the sectors that were predicted to win from the internet have struggled over the past two decades (think telcos, yellow pages businesses, incumbent media companies).
- Many of the “first movers” struggled or went out of business, either because larger existing businesses moved into their turf (for example, telcos competed with internet service providers, and Microsoft competed with Netscape) or because better offerings came to market (e.g. Google out-competed Yahoo! and other web directories).
- Many of the hardware enablers that enjoyed massive margins in the early days of the internet boom (e.g. Cisco, Alcatel, Nortel, Intel) subsequently struggled as increased competition and slowing growth in demand eroded those margins.
Before we begin, it probably makes most sense to describe what Generative AI and the Large Language Models available today are, and perhaps more importantly, what they are not.
Large Language Models like ChatGPT are a type of artificial intelligence that utilises a form of deep learning called artificial neural networks, inspired by the human brain. These models are trained on vast amounts of text so that the networks can identify patterns and, when prompted, generate the most likely sequence and arrangement of words based on the patterns learned in training. Eerily coherent and convincing as they may seem, these models do not have consciousness.
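For the curious, the ‘guessing’ at the heart of these models can be sketched in a few lines of Python. This is a deliberately toy illustration, not how ChatGPT actually works: real LLMs learn billions of parameters with neural networks rather than a hand-written lookup table, and the words and probabilities below are invented purely for illustration.

```python
import random

# Toy "language model": given the last two words, guess the next
# one from probabilities "learned" from training text. The table
# below is invented for illustration; real LLMs learn billions of
# parameters with neural networks, not a hand-written lookup.
LEARNED = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def next_word(context):
    # Sample the next word, weighted by the learned probabilities.
    options = LEARNED.get(tuple(context[-2:]))
    if options is None:  # no pattern learned for this context
        return None
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

text = ["the", "cat"]
for _ in range(6):
    word = next_word(text)
    if word is None:
        break
    text.append(word)
print(" ".join(text))  # e.g. "the cat sat on the mat"
```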
Effectively, these ‘stochastic parrots’ are extraordinarily efficient guessing machines. Now let’s get started on some guesses of our own.
Nvidia: the foundations
Nvidia is one of a handful of large-cap technology stocks that have contributed most to the positive return of developed-world equity-market indices this year, having itself more than doubled in value since the start of the year.
Nvidia designs and develops specific kinds of semiconductors and enjoys a near-monopoly selling the Graphics Processing Units (GPUs) required to optimise the weights of the neural networks inside artificial intelligence models. The neural networks in today’s ground-breaking LLMs have on the order of hundreds of billions of parameters, possibly well over a trillion.
It probably isn’t a stretch to ascribe much of Nvidia’s share price performance to hopes that the potential of generative AI will bear fruit in the form of massive orders for Nvidia’s datacentre GPUs at high prices. But what would that look like?
Even though Nvidia’s GPUs are very powerful in their ability to break down matrix multiplications and perform many such calculations in parallel, it still takes close to 5,000 of Nvidia’s best GPUs working simultaneously for several months to train a large language model with around a trillion parameters. Today, Nvidia’s most powerful H100 processors, released at the end of 2022, can cost over US$40,000. Now let’s take a measured guess with the following loose assumptions:
- Considering that over 100 LLMs with tens of billions of parameters or more have been trained and publicly released in the last year, we can probably assume that 100 are being trained this year on the latest GPUs.
- That each of these models requires about 5,000 H100 GPUs, including some for redundancy.
- Each costing about US$40,000.
- That each model takes 4 months to train from inception.
So, the cost of the GPUs required for a model is US$40,000 x 5,000 = US$200 million, but each set of GPUs is used three times a year, since many of the same companies are building several models, sometimes on shared infrastructure. Hence, the pro-rata cost of training a model is US$200 million / 3 = US$66.6 million. Multiply this by the 100 models and we get 100 x US$66.6 million = US$6.66 billion.
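Laid out as a single back-of-the-envelope calculation (every input below is one of the loose assumptions above, not a figure Nvidia has disclosed):

```python
# Back-of-the-envelope GPU revenue from new LLMs -- all inputs
# are the loose assumptions above, not Nvidia disclosures.
gpus_per_model = 5_000     # H100s to train a ~1 trillion parameter model
price_per_gpu = 40_000     # US$, assumed H100 price
models_this_year = 100     # large models assumed trained on latest GPUs
trainings_per_fleet = 3    # each GPU fleet reused ~3 times a year

fleet_cost = gpus_per_model * price_per_gpu         # US$200 million
cost_per_model = fleet_cost / trainings_per_fleet   # ~US$66.7 million
total_spend = cost_per_model * models_this_year     # ~US$6.7 billion
print(f"Implied GPU spend: US${total_spend / 1e9:.2f} billion")
```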
But that is just to train the models; serving them to users requires GPUs too, although a model is more computationally expensive to ‘write’ (train) than to ‘read’ (query), and only some of the models will be used as much as ChatGPT is today. We’re just trying to make a guess here, so I won’t bother with fine-tuning my butter-knife level of precision any further, and will assume a round figure of US$10 billion in extra revenue that Nvidia could generate from new generative artificial intelligence models in the next year alone.
An incremental US$10 billion in revenues would represent an almost 40% increase from last year, if Nvidia is able to maintain the revenues from its other products (turning a blind eye to the likely cannibalisation effect of the H100). With some price increases and even more models, we could possibly envisage Nvidia doubling its revenues in 3 years off the back of new Large Language Models.
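As a quick sanity check on that ‘almost 40%’ figure: Nvidia reported roughly US$27 billion of revenue in its fiscal year 2023.

```python
# Sanity check on the "almost 40%" claim.
incremental_revenue = 10e9   # the round US$10 billion guess above
fy2023_revenue = 27e9        # Nvidia's fiscal 2023 revenue, roughly
print(f"Implied uplift: {incremental_revenue / fy2023_revenue:.0%}")  # 37%
```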
If Nvidia’s revenues and profits continue to grow at that clip beyond the initial boom, then Nvidia’s valuation could be warranted. Nvidia marks up the price of its chips over the cost to manufacture by over 100% on average, and probably many times over for its newest AI chips. Nvidia’s major customers, the public cloud behemoths Microsoft, Amazon, Google, and Meta, are not paying up gleefully. Nvidia attributes much of its value proposition to the proprietary software and interconnects on top of the hardware, accepting that on the hardware itself its nearest competitors are competitive.
Sure enough, each of the public cloud titans is developing its very own GPUs and AI-specific chips tailored to its needs. After all, just like Nvidia, those companies are very good at making software.
So, if I had to hazard a guess, the operative word being hazard, I would guess that Nvidia may well report, and guide for, revenues higher than analysts’ estimates for the immediately foreseeable future, seemingly justifying its recent doubling in market valuation. In the long run, though, my guess would be that Nvidia’s profitability and investment proposition will suffer as its most important clients transition from customers to competitors.
Disney: the second-order effect
A little less obvious, but likely more intuitive, is the effect on a more recognisable household name: Disney. Large Language Models are compelling in their ability to mimic the idiosyncrasies and undercurrents unique to human conversation, which makes them very good at chatting but also, potentially, at helping to write dialogue.
Add to this the text-to-image generation capabilities of other forms of generative AI (Midjourney and Stable Diffusion, for example), and storyboards could theoretically be put together in a fraction of the time. With much of Disney’s content based on computer-generated imagery or entirely animated, this technology may indeed be well suited. The proposition is easy to dismiss when we think of applying it to a potential Oscar winner; the myriad children’s shows with hundreds of episodes a year and simple animation, however, generative AI could surely handle.
Let’s assume that writers employed at Disney, armed with the latest generative AI models, can turn around 3 scripts in the time they used to take to write 2, and that their animation and production counterparts can do the same. Disney spends about US$17 billion per year on creating video content. If we assume that about a third, or US$6 billion, of this cost relates to script writing and animation, Disney could potentially enjoy a US$2 billion reduction in costs from the enhanced productivity of script writing and animation. If (say) US$0.5 billion of this saving is offset by the costs of buying in AI, the net saving to Disney could be US$1.5 billion. Cost savings of this magnitude could potentially add about 23% to Disney’s bottom line.
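For transparency, here is the same arithmetic laid out in full. The US$17 billion content spend is from the paragraph above; the one-third split, the productivity gain, and the AI cost offset are all assumptions, and the implied bottom line is an inference from the 23% figure rather than a disclosed number.

```python
# Rough Disney saving -- the split, productivity gain, and AI cost
# offset are assumptions, as in the text above.
writing_and_animation = 6e9   # assumed ~1/3 of US$17bn content spend
productivity_gain = 1 / 3     # 3 scripts in the time of 2 => costs fall 1/3
ai_tooling_cost = 0.5e9       # assumed cost of buying in AI

gross_saving = writing_and_animation * productivity_gain  # US$2 billion
net_saving = gross_saving - ai_tooling_cost               # US$1.5 billion
print(f"Net saving: US${net_saving / 1e9:.1f} billion")
# A ~23% bottom-line uplift from US$1.5bn implies net income of
# roughly US$6.5bn -- an inference, not a figure from the article.
```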
Disney has not participated in the ‘AI bull market’ of 2023, even though it is perhaps likely to be a beneficiary of more efficient content generation in the long run. I would guess that Disney does indeed see more of an improvement in its operating model from generative AI than most companies, even if it falls short of the scenario I described. The productivity benefit would equally exist for Disney’s competitors, making it critical for the company to use the technology to differentiate its offering. If it doesn’t, generative AI could simply lead to another race to the bottom for profitability, as has happened elsewhere when innovative technologies have made industries more competitive.
The real issue for Disney, I imagine, as well as for its content streaming counterparts, will not be content productivity, but that there are only so many people, so many shows and required subscriptions, and only so much spare cash consumers are willing to spend on streaming services, when switching to the content ‘flavour-of-the-month’ takes less than five clicks on an iPhone.
Regulation: snatched away as quickly as we got it?
My last guess is one in a realm in which I have even less insight, so I suggest taking it with a few even larger pinches of salt; the Himalayan kind, perhaps. As the reality of a world abounding with generative AI has fallen upon us, the focus has shifted, after that brief period of fear and excitement, from ‘incredulation’ to speculation and now to regulation.
Could it be that governments will stop us from using some forms of artificial intelligence, based on a view that the potential emergent effects of many such models acting with their own biases and agency are neither predictable nor estimable, and may ultimately be dangerous? Last month Italy blocked ChatGPT before the service resumed with a few caveats.
It's difficult to stake a position on a company's stock based on speculations around government policy. Yet, when a company's valuation doubles in response to a development that might provoke governmental intervention, it pays to think about the risks of such an outcome.
I would guess that the allure of generative AI as an economic boon will seem too promising to ignore, and the potential adverse outcomes too distant and abstract, to today’s governments, making it unlikely that they will hit the ‘off’ switch on generative AI.
So, as of today, I am betting $25 a month that I will continue to have access to ‘ChatGPT Plus’. You might like to guess whether or not it wrote this article.
Prithvi Sharma is an Equity Analyst at Te Ahumairangi Investment Management
Disclaimer: This article is for informational purposes only and is not, nor should be construed as, investment advice for any person. The writer is an employee and shareholder of Te Ahumairangi Investment Management Limited. Te Ahumairangi manages client portfolios (including the Te Ahumairangi Global Equity Fund) that invest in global equity markets. These portfolios include holdings of Disney.