I got access to Google’s Bard language model late last week and have spent the past few days testing it out.
Like Bing AI (and soon, ChatGPT), Bard is able to perform a web search for factual information lookup. To its credit, Bard seems to do a better job summarizing and integrating this information into its answers when compared to Bing. There is a catch, though: Bard very rarely cites its sources. This almost defeats the purpose of its web lookup capabilities altogether — if you are going to go to the trouble of aggregating outside information, I would like to be able to check your work.
Bard is less “steerable” than ChatGPT. By that, I mean it is more difficult to direct its responses in particular ways — “limit all of your responses to only one word”, “always respond in the Socratic style”, “each word of your answer must begin with the letter W”, etc. This steerability is the magic behind ChatGPT — it is what transformed the “glorified autocomplete” of GPT-3 into an “intelligent assistant”. OpenAI’s InstructGPT paper has more information on the approach they took to achieve this.
Overall, Bard is comparable — a little better in some ways, a little worse in others — to the original GPT-3.5 iteration of ChatGPT. If it had launched in December of last year, around the time Google issued their infamous “code red” memo — before Bing AI, the ChatGPT API, GPT-4, and ChatGPT Plugins — it would have been a serious contender. At this point, though, it feels like Google is still playing catch-up to where OpenAI was last year. That is not a great place to be.