He’s right, you know. Kurzweil’s techno-optimism is being proved right about AI as the pace of progress accelerates. Putting aside predictions of when we get to AGI or the singularity, we are at the dawn of an era of cheap, powerful AI. Powerful means AI that can pass the Turing Test and the GMAT, and write essays and legal briefs. Cheap came with the recent OpenAI announcement offering the ChatGPT API endpoint at $0.002/1,000 tokens, one-tenth the price of GPT-3. There’s your sign.
On the heels of OpenAI’s announcement, Max Woolf makes the point that “ChatGPT's API is So Good and Cheap, It Makes Most Text Generating AI Obsolete.” The latest OpenAI pricing is $0.002/1,000 tokens for the latest and greatest GPT-3.5, versus $0.06/1,000 tokens for the GPT-3 beta in 2021: a 30x price reduction in two years for a model that is much better. This price point makes the GPT-3 API obsolete, makes ChatGPT+ redundant, and threatens a myriad of customized API model competitors. Max notes:
But in the process of making the ChatGPT API so cheap, they made their $20/month subscription to ChatGPT+ redundant. … OpenAI’s solution for models requiring more specific needs was finetuning a smaller and much cheaper variant of GPT-3, such as the babbage model which I used to train a blog post title optimizer. However, the ChatGPT API is so cheap that it’s still cheaper than a finetuned babbage ($0.0020/1k tokens for ChatGPT vs. $0.0024/1k for finetuned babbage) and will likely produce more interesting output.
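To make the price comparison concrete, here is a back-of-the-envelope sketch using only the per-token prices quoted above (the model labels and the 1M-token workload are illustrative assumptions, not figures from the announcement):

```python
# Quoted prices in USD per 1,000 tokens, as cited in the post (early 2023).
PRICES_PER_1K = {
    "gpt-3.5-turbo (ChatGPT API)": 0.002,
    "GPT-3 davinci (2021 beta)": 0.060,
    "finetuned babbage": 0.0024,
}

def cost(model: str, tokens: int) -> float:
    """Dollar cost of processing `tokens` tokens at the quoted rate."""
    return PRICES_PER_1K[model] * tokens / 1000

# Example workload: one million tokens, roughly 750k English words
# of prompts plus completions.
for model in PRICES_PER_1K:
    print(f"{model}: ${cost(model, 1_000_000):,.2f} per 1M tokens")

ratio = PRICES_PER_1K["GPT-3 davinci (2021 beta)"] / PRICES_PER_1K["gpt-3.5-turbo (ChatGPT API)"]
print(f"GPT-3 davinci / ChatGPT price ratio: {ratio:.0f}x")
```

At these rates, a million tokens through the ChatGPT API costs $2, versus $60 through 2021-era davinci, which is the 30x reduction Max highlights; the finetuned babbage comparison ($2.40 per 1M tokens) follows the same arithmetic.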
This upends the economics of other AI solutions offered as API endpoints, such as Cohere and AI21 Labs, and it’s a shot across the bow for other AI research teams and companies with LLMs on deck. Anthropic’s Claude is being opened up to startups, while Google Bard has been released to “external testers”. Google and other groups have had LLMs and multi-modal foundation models at least as good as GPT-3 and ChatGPT, but they have been far more cautious about whether, and how, to release them.
OpenAI is reaping the benefits of first-mover status from the caution of others, building mind-share and ecosystems around its models. Already, Snapchat, Shopify, and others are announcing solutions built on the latest OpenAI APIs. Any company holding back will lose out on the opportunities that OpenAI is grasping. This should be concentrating minds at Google and elsewhere.
More important than who wins the LLM arms race is this massive reduction in the cost of AI. AI is now cheap. It’s gotten 100 times cheaper and more powerful in the last 5 years. Prediction: AI will get 100 times cheaper and better over the next 5 years.
The enormous drop in the price of high-value LLMs opens up the AI land rush and disruptive innovation across a number of industries:
For specific applications like litigation or risk analysis, if a company finds a way to automate a significant chunk of work with ChatGPT, it may make it seriously difficult to be price-competitive against other vendors. For once, this price dip is unlikely to come with a fall in quality of output, as only the mundane non-strategic parts of the process will be ChatGPT-automatable.
AI is now at the kind of inflection point the internet reached in 1995. In the 1990s, it was the fantastic drop in the price of bandwidth that enabled the enormous opportunity of that communications revolution. Here, it’s the price per insight embedded in an API call that’s plummeting.
While what’s happening already is very exciting, we’ve only just begun to see the changes driven by the rise of AI capabilities. Those changes will ripple through our entire economy, as AI upends everything we do.