Unlike many technologies hyped as revolutionary, there is a real case that Large Language Models (LLMs) like OpenAI’s GPT will genuinely change the world.
When I first tried ChatGPT I found it impressive, but not useful. The ability to write an academic-style essay was something software had never previously been capable of. However, it wasn’t a tool I needed, and while it was sure to improve, it was still a little rough around the edges.
As a replacement for a search engine, ChatGPT was less impressive. It was slightly worse than searching, and its knowledge was a year and a half out of date.
What really blew me away with using an LLM was trying out GitHub Copilot. Copilot isn’t always correct, but it is incredible how often it is, and how often it can make sense of unique code. It has been genuinely useful for doing my job.
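To give a sense of what that looks like in practice, here is a made-up sketch of the kind of completion Copilot tends to produce: the function name and docstring are all I would type (both invented here for illustration), and the body is the sort of plausible suggestion it fills in.

```python
# What I type: a signature and a short docstring.
def dedupe_preserving_order(items):
    """Remove duplicates from a list while keeping the original order."""
    # What Copilot typically suggests from here on:
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result
```

It isn’t that this snippet is hard to write by hand; it’s that the suggestion shows up instantly, in my own codebase, shaped to the names and style already around it.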
But the larger argument is for the things that LLMs are not yet good at, but may be better at in the future. For example, consider language translation and chess. Currently an LLM will happily attempt either of these tasks, but will not do particularly well at either of them. If you want a computer to play chess, you are much better off using a dedicated chess engine like Stockfish. If you want to perform a translation, you are much better off using a program like DeepL.
But the catch is that Stockfish cannot perform translations, and DeepL cannot play chess. If LLM technology improves rapidly because it receives funding and attention for some unrelated thing it happens to be good at, like image recognition or natural language understanding, then the whole LLM stack might improve much faster than the dedicated efforts behind DeepL or Stockfish.
In many ways this could look like a repeat of how general-purpose computing improved so quickly that eventually all computing became general-purpose computing. If LLMs improve enough, perhaps in the future all AI will be LLM-based AI.