“How Does OpenAI Survive?” | naked capitalism
Yves here. While we are all waiting for the next shoe to drop in the Middle East escalation drama, it seemed useful to look at some important real economy issues. A biggie is the prospects for AI, and specifically, OpenAI.
Ed Zitron reviewed and advanced his compelling case against OpenAI in a weighty post last week (estimated 31 minute read). Since his argument is multi-fronted, detailed, and well documented, I am concerned that our recap here will not do justice to his substantial body of work. I therefore urge those who take issue with Zitron’s case to read his post to verify that the apparent shortcomings are due to my having to leave huge swathes of his argument on the cutting room floor.
Before turning to Zitron’s compelling takedown, note that the fact that AI’s utility has been greatly exaggerated does not mean it is useless. In fact, it could have applications in small firm settings. The hysteria of some months back about AI posing a danger to humanity was meant to justify regulation. The reason for that, in turn, was that the AI promoters woke up to the fact that there were no barriers to entry in AI. Itty bitty players could come up with useful applications based on itty bitty training sets. Think of a professional services firm using AI to generate routine letters to clients.
Some hedge funds have pursued a much higher-end application, that of so-called black box trading. I will confess I have not seen any performance stats on various strategies (so-called quantitative versus “event-driven” as in merger arbitrage versus market neutral versus global arbitrage and a few other flavors). However, I do not recall any substrategy regularly outperforming, much less an AI black box. I am sure the press would have been all over any success in this arena.
Back to Zitron. He depicts OpenAI as the mother of all bezzles, having to do many, many impossible or near-impossible things to survive. Recall the deadly cumulative probability math that applies to young ventures. If you have to do seven things for the enterprise to prosper, and the odds of succeeding at each one are 90%, that’s a winner, right?
Nope. Pull out a calculator: 0.9 × 0.9 × 0.9 × 0.9 × 0.9 × 0.9 × 0.9 ≈ 0.478, as in less than 50% odds of success.
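For readers who want to check the arithmetic, here is a minimal sketch of the cumulative-probability calculation (the function name is mine, for illustration):

```python
# Odds of succeeding at all n independent tasks, each with the same
# per-task success probability: the probabilities multiply together.
def joint_success(p_each: float, n_tasks: int) -> float:
    return p_each ** n_tasks

# Seven must-win tasks at 90% each: overall odds drop below a coin flip.
print(round(joint_success(0.9, 7), 3))  # → 0.478
```

The same compounding logic is why venture math is so unforgiving: each additional hurdle multiplies the failure risk rather than adding to it.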
He also compares OpenAI to Uber, very unfavorably. We have to quibble about his generous depiction of Uber as meeting a consumer need. That becomes dubious when you realize that Uber is inherently a high-cost provider, with no barriers to entry. Its popularity rests substantially on investors massively subsidizing the cost of the rides. If you were getting a seriously underpriced service, what’s not to like?
One mistake we may have made in our analysis of Uber is not recognizing it as primarily an investment play. Recall that in the 1800s in the US, railroad after railroad was launched, some with directly competing lines. Yet despite almost inevitable bankruptcies, more new operators laid more track. Why? These were stock market plays (one might say swindles), with plenty of takers despite the record of failure.
Uber and the recent unicorns were further aided and abetted by venture capital investors using crude valuation procedures that had the effect of greatly increasing enterprise value, and thus making these investments look way more attractive than they were.
Zitron’s thesis statement:
I am hypothesizing that for OpenAI to survive for longer than two years, it will have to (in no particular order):
- Successfully navigate a convoluted and onerous relationship with Microsoft, one that exists both as a lifeline and a direct source of competition.
- Raise more money than any startup has ever raised in history, and continue to do so at a pace totally unseen in the history of financing.
- Have a significant technological breakthrough such that it reduces the costs of building and operating GPT — or whatever model that succeeds it — by a factor of thousands of percent.
- Have such a significant technological breakthrough that GPT is able to take on entirely unseen new use cases, ones that are not currently possible or hypothesized as possible by any artificial intelligence researchers.
- Have these use cases be ones that are capable of both creating new jobs and entirely automating existing ones in such a way that it will validate the massive capital expenditures and infrastructural investment necessary to continue.
I ultimately believe that OpenAI in its current form is untenable. There is no path to profitability, the burn rate is too high, and generative AI as a technology requires too much energy for the power grid to sustain it, and training these models is equally untenable, both as a result of ongoing legal issues (as a result of theft) and the amount of training data necessary to develop them.
And, quite simply, any technology requiring hundreds of billions of dollars to prove itself is built upon bad architecture. There is no historical precedent for anything that OpenAI needs to happen. Nobody has ever raised the amount of money it will need, nor has a piece of technology required such an incredible financial and systemic force — such as rebuilding the American power grid — to survive, let alone prove itself as a technology worthy of such investment.
To be clear, this piece is focused on OpenAI rather than Generative AI as a technology — though I believe OpenAI’s continued existence is necessary to keep companies interested/invested in the industry at all…
What I am not saying is that OpenAI will for sure collapse, or that generative AI will definitively fail…my point here is to coldly explain why OpenAI, in its current form, cannot survive longer than a few more years without a stunning confluence of technological breakthroughs and financial wizardry, some of which is possible, much of which has no historic precedence.
Zitron starts by looking at the opaque but nevertheless apparently messy relationship between Microsoft and OpenAI, and how that might affect valuation. This is a bit weedy for a generalist reader but informative both for tech industry and finance types. Because this part is of necessity a bit dense, we suggest you go to the Zitron post to read it in full.
This discussion segues into the question of funding. The bottom line here (emphasis original):
Assuming everything exists in a vacuum, OpenAI needs at least $5 billion in new capital a year to survive. This would require it to raise more money than has ever been raised by any startup in history, possibly in perpetuity, which would in turn require it to access capital at a scale that I can find no comparable company to in business history.
Zitron goes through the pretty short list of companies that have raised ginormous amounts of money in the recent past and argues that OpenAI is much more of a money pit, simply from a burn rate and probable burn duration perspective.
He then drills into profitability, or the lack thereof, compounded by what in earlier days would have been called build-out problems:
As I’ve written repeatedly, generative AI is deeply unprofitable, and based on the Information’s estimates, the cost of goods sold is unsustainable.
OpenAI’s costs have only increased over time, and the cost of making these models “better” are only increasing, and have yet to, to paraphrase Goldman Sachs’ Jim Covello, solve the kind of complex problems that would justify their cost…Since November 2022, ChatGPT has grown more sophisticated, faster at generations, capable of ingesting more data, but has yet to generate a true “killer app,” an iPhone-esque moment.
Furthermore, transformer-based models have become heavily-commoditized…As a result, we’re already seeing a race to the bottom…
As a result, OpenAI’s revenue might climb, but it’s likely going to climb by reducing the cost of its services rather than its own operating costs…
As discussed previously, OpenAI — like every single transformer-based model developer — requires masses of training data to make its models “better”…
Doing so is also likely going to lead to perpetual legal action…
And, to be abundantly clear, I am not sure there is enough training data in existence to get these models past the next generation. Even if generative AI companies were able to legally and freely download every single piece of text and visual media from the internet, it doesn’t appear to be enough to train these models…
And then there’s the very big, annoying problem — that generative AI doesn’t have a product-market fit at the scale necessary to support its existence.
To be clear, I am not saying generative AI is completely useless, or that it hasn’t got any product-market fit…
But what they are not, at this time, is essential.
Generative AI has yet to come up with a reason that you absolutely must integrate it, other than the sense that your company is “behind” if you don’t use AI. This wouldn’t be a problem if generative AI’s operating costs were a minuscule fraction — tens or hundreds of thousands of percent — of what they are today, but as things stand, OpenAI is effectively subsidizing the generative AI movement, all while dealing with the problem that while cool and useful, GPT is only changing the world as much as the markets allow it to.
He has a lot more to say on this topic.
Oh, and that is before getting to the wee matter of energy, which he also analyzes in depth.
He then returns to laying out what OpenAI would need to do to surmount these impediments, and why that looks wildly improbable.
Again, if OpenAI or AI generally is a topic of interest, be sure to read the entire Zitron post. And be sure to circulate it widely.