In an article this week on Stratechery, Ben Thompson does a great job of articulating something I have been chewing on for a while now but have been unable to put into words myself. High-profile blunders from both Google’s Bard and Bing AI have sparked lots of discussion about the accuracy of large language models’ output; in particular, whether the fact that LLMs make factual errors disqualifies them from being used as serious tools. This has never been a convincing argument to me. No single knowledge source — be it parents, professors, or Wikipedia — is infallible. Your job, when researching a new topic, is to use prior knowledge and common sense to compile and vet sources in order to carve out some semblance of a consensus. Relying solely on a single source — LLM or otherwise — is never smart.

Ben Thompson:

One final point: it’s obvious on an intellectual level why it is “bad” to have wrong results. What is fascinating to me, though, is that I’m not sure humans care… After all, it’s not as if humans are right 100% of the time, but we like talking to and learning from them all the same; the humanization of computers, even in the most primitive manifestation we have today, may very well be alluring enough that good enough accuracy is sufficient to gain traction.