What is the cost of artificial intelligence?
I have a free account on ChatGPT, but as this episode from the Marketplace Tech podcast (and many other sources) reminds us, “just because a service doesn’t charge users doesn’t mean it doesn’t have costs”.
ChatGPT seemed to come out of nowhere last fall, followed soon by Google’s Bard and other so-called “large language model” AI services. But the huge tech companies behind them have been pouring billions of dollars into development for years, and none of that is free.
And, as Washington Post tech writer Will Oremus told the host, that investment is only the beginning.
So there’s a huge initial cost when you’re training the models, but then that cost in the long run is actually dwarfed by the cost of just running the models — the computing cost every time you use a chatbot or every time you call on one of these models. And it runs only on specific high-end chips called [graphics processing units]. A set of GPUs that can run AI applications can cost in the tens of thousands of dollars. And it’s not just chatbots. I mean, Microsoft is putting GPT-type software into everything from Microsoft Excel, Word, PowerPoint, Bing, Skype, you know, and so the proliferation of these large language models across all kinds of applications just racks up more and more and more computing cost.
In his Post article, Oremus cites one analyst’s estimate that “ChatGPT was costing OpenAI some $700,000 per day in computing costs alone”. That’s a pretty expensive tech toy.
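Just to put that number in perspective, here’s a quick back-of-envelope sketch. The daily query volume is purely my own guess (the article doesn’t give one), so the per-query figure is illustrative, not real:

```python
# Rough back-of-envelope: what might ChatGPT cost per query?
# The $700,000/day figure is the analyst estimate quoted above;
# the query volume is my own assumption, for illustration only.
daily_compute_cost = 700_000          # dollars per day (analyst estimate)
assumed_queries_per_day = 10_000_000  # hypothetical: 10 million queries/day

cost_per_query = daily_compute_cost / assumed_queries_per_day
print(f"~${cost_per_query:.3f} per query")  # ~$0.070 per query

# And scaled up: a year of inference at that daily rate
print(f"~${daily_compute_cost * 365 / 1e6:.0f} million per year")  # ~$256 million
```

Even if my query count is off by a lot, the point stands: the meter keeps running on every single request, which is exactly why Oremus says inference eventually dwarfs the training bill.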
All that computing power comes at a cost to the environment as well.
Depending on the energy source used for training and its carbon intensity, training a 2022-era LLM emits at least 25 metric tons of carbon equivalents if you use renewable energy, as we did for the BLOOM model. If you use carbon-intensive energy sources like coal and natural gas, which was the case for GPT-3, this number quickly goes up to 500 metric tons of carbon emissions, roughly equivalent to over a million miles driven by an average gasoline-powered car.
And this calculation doesn’t consider the manufacturing of the hardware used for training the models, nor the emissions incurred when LLMs are deployed in the real world. For instance, with ChatGPT, which was queried by tens of millions of users at its peak a month ago, thousands of copies of the model are running in parallel, responding to user queries in real time, all while using megawatt hours of electricity and generating metric tons of carbon emissions. It’s hard to estimate the exact quantity of emissions this results in, given the secrecy and lack of transparency around these big LLMs.
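You can sanity-check that “over a million miles” comparison with some simple arithmetic, using the EPA’s figure of roughly 404 grams of CO2 per mile for an average gasoline-powered car:

```python
# Sanity-checking the "over a million miles driven" comparison.
# EPA figure: an average gasoline-powered car emits about 404 g CO2 per mile
# (treat as approximate).
training_emissions_tons = 500            # metric tons CO2 (the GPT-3 estimate above)
grams_per_mile = 404                     # EPA average

equivalent_miles = training_emissions_tons * 1_000_000 / grams_per_mile
print(f"{equivalent_miles:,.0f} miles")  # ~1,237,624 miles -- over a million, as claimed

# For comparison, the renewable-energy case (BLOOM) at 25 metric tons:
print(f"{25 * 1_000_000 / grams_per_mile:,.0f} miles")  # ~61,881 miles
```

The numbers hold up, and they make the gap between renewable and carbon-intensive training sources pretty stark.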
Then there are the potential societal costs, some of which the European Union is trying to address, and about which the US government is making a lot of empty noise.
What does all this mean? I’m not sure. I’m no expert, and this post is mostly a collection of ideas I’ve run across over the past few months. Stuff to consider when using all those “free” tech-based services.
Hey, isn’t that what a blog is for?
The photo is “borrowed” (again!) from the Post article. I guess I’m getting lazy about the creative use of images. :)