Now that Sydney, Microsoft’s AI search assistant, has receded from view after a spectacular rise, I thought it might be a good time to check in with Google’s alternative: Bard.

When we last heard from Bard, Google had just lost $100 billion in market value after factual errors were discovered in marketing materials for the AI assistant. Factual errors seem like a quaint issue now, don’t they?

Well, it sounds like, over the past week, Google has taken a step back and tried to learn what it can from the whole Sydney saga. One outcome is a last-minute round of RLHF, reinforcement learning from human feedback.

Jennifer Elias at CNBC:

Prabhakar Raghavan, Google’s vice president for search, asked staffers in an email on Wednesday to help the company make sure its new ChatGPT competitor gets answers right.

Staffers are encouraged to rewrite answers on topics they understand well.

[…]

To try and clean up the AI’s mistakes, company leaders are leaning on the knowledge of humans. At the top of the do’s and don’ts section, Google provides guidance for what to consider “before teaching Bard.”

Google instructs employees to keep responses “polite, casual and approachable.” It also says they should be “in first person,” and maintain an “unopinionated, neutral tone.”

… “don’t describe Bard as a person, imply emotion, or claim to have human-like experiences,” the document says.
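
In ML terms, what Google is asking for here is preference data: the model's original answer paired with an employee's rewrite, which can then be used to train a reward model that scores the rewrite higher than the original. Here's a minimal, purely illustrative sketch in PyTorch of that pairwise step; the toy bag-of-words scorer and the example pair are my own stand-ins, not anything from Google's actual pipeline.

```python
# Illustrative sketch only: each (prompt, rewrite, original) triple becomes a
# preference pair, and a reward model learns to score the employee rewrite
# ("chosen") above the model's original answer ("rejected").
import torch
import torch.nn as nn

# Hypothetical data, invented for this example.
pairs = [
    ("when did bard launch",
     "Bard will roll out to trusted testers before a wider public launch.",  # chosen (rewrite)
     "Bard launched in 2021 and is loved by everyone."),                      # rejected (original)
]

vocab = sorted({w for _, a, b in pairs for w in (a + " " + b).lower().split()})
index = {w: i for i, w in enumerate(vocab)}

def featurize(text: str) -> torch.Tensor:
    """Toy bag-of-words vector; a real reward model would use a transformer."""
    v = torch.zeros(len(index))
    for w in text.lower().split():
        if w in index:
            v[index[w]] += 1.0
    return v

reward_model = nn.Linear(len(index), 1)  # maps features to a scalar reward
opt = torch.optim.Adam(reward_model.parameters(), lr=0.1)

for step in range(100):
    loss = torch.tensor(0.0)
    for _, chosen, rejected in pairs:
        r_chosen = reward_model(featurize(chosen))
        r_rejected = reward_model(featurize(rejected))
        # Standard pairwise preference loss: push the chosen answer's
        # reward above the rejected one's.
        loss = loss - torch.nn.functional.logsigmoid(r_chosen - r_rejected).squeeze()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A real pipeline would put a large language model behind the reward and then use that reward signal to fine-tune the assistant itself, but the pairwise loss above is the core idea behind turning employee rewrites into training signal.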

It’s not surprising, but it is disappointing, that Google appears to be taking the cold, analytical, ChatGPT-like approach with its new assistant. Maybe our best hope for a highly personal, Sydney-like model lies with OpenAI after all.