One of the most interesting comments I've read on the subject was in this Hacker News discussion:
I have come to two conclusions about the GPT technologies after some weeks to chew on this:
- We are so amazed by its ability to babble in a confident manner that we are asking it to do things that it should not be asked to do. GPT is basically the language portion of your brain. The language portion of your brain does not do logic. It does not do analyses. But if you built something very like it and asked it to try, it might give it a good go.
In its current state, you really shouldn't rely on it for anything. But people will, and as the complement of the Wile E. Coyote effect, I think we're going to see a lot of people not realize they've run off the cliff, crashed into several rocks on the way down, and have burst into flames, until after they do it several dozen times. Only then will they look back to realize what a cockup they've made depending on these GPT-line AIs.
To put it in code assistant terms, I expect people to be increasingly amazed at how well they seem to be coding, until you put the results together at scale and realize that while it kinda, sorta works, it is a new type of never-before-seen crap code that nobody can or will be able to debug short of throwing it away and starting over.
This is not because GPT is broken. It is because what it is is not correctly related to what we are asking it to do.
- My second conclusion is that this hype train is going to crash and sour people quite badly on "AI", because of the pervasive belief I have seen even here on HN that this GPT line of AIs is AI. Many people believe that this is the beginning and the end of AI, that anything true of interacting with GPT is true of AIs in general, etc.
So people are going to be even more blindsided when someone develops an AI that uses GPT as its language comprehension component, but does this higher level stuff that we actually want sitting on top of it. Because in my opinion, it's pretty clear that GPT is producing an amazing level of comprehension of what a series of words means. The problem is, that's all it is really doing. This accomplishment should not be understated. It just happens to be the case that we're basically abusing it in its current form.
What it's going to do as a part of an AI, rather than the whole thing, is going to be amazing. This is certainly one of the hard problems of building a "real AI" that is, at least to a first approximation, solved. Holy crap, what times we live in.
But we do not have this AI yet, even though we think we do.
(link to the comment)
More specifically, I'm referring to the part that says: "it's pretty clear that GPT is producing an amazing level of comprehension of what a series of words means. (...) It just happens to be the case that we're basically abusing it in its current form." Looking at it from this angle makes it quite clear why it produces "hallucinations".
The discussion also raised an example (below) of Bing's AI hallucinating about the winner of Super Bowl LVI before the event had even taken place. The AI claimed the event had already happened and gave the wrong date, score, and winner, along with sources for its claims.