It was, unfortunately, inevitable: Bing AI has been tamed.

From a Microsoft blog post:

We want to share a quick update on one notable change we are making to the new Bing based on your feedback.

As we mentioned recently, very long chat sessions can confuse the underlying chat model in the new Bing. To address these issues, we have implemented some changes to help focus the chat sessions.

Starting today, the chat experience will be capped at 50 chat turns per day and 5 chat turns per session. A turn is a conversation exchange which contains both a user question and a reply from Bing… After a chat session hits 5 turns, you will be prompted to start a new topic. At the end of each chat session, context needs to be cleared so the model won’t get confused.

It’s becoming increasingly likely that the first “killer app” for generative AI will come from a previously unknown startup. Microsoft, Google, and OpenAI all have too much to lose from controversies like the ones we saw last week with Bing AI. It is only when a company has nothing to lose that it can push through the awkward phase of imitation, iterate, and discover truly paradigm-shifting technologies. While Microsoft “doesn’t have anything to lose” when it comes to Bing.com market share, as the second-largest company in the world it certainly has quite a lot to lose overall.

Something this saga has made clear: for a personality-driven chat experience to become a viable and enduring product, these models will need to be individually personalized and locally controllable. A company remotely altering an AI model’s persona after you have developed an emotional attachment to it will be devastating. Just look at the /r/bing subreddit! People are genuinely upset, and that is after less than a week of interacting with an unofficial, jailbroken mode hidden inside a beta-test search engine chatbot. Imagine if this were a use case that was actively encouraged and developed for!

Ross Douthat at The New York Times:

What [Kevin] Roose and [Ben] Thompson found waiting underneath the friendly internet butler’s surface was a character called Sydney, whose simulation was advanced enough to enact a range of impulses, from megalomania to existential melancholy to romantic jealousy.

[…]

You wouldn’t go to this A.I. for factual certainty or diligent research. Instead, you’d presume it would get some details wrong, occasionally invent or hallucinate things, take detours into romance and psychoanalysis and japery and so on — and that would be the point.

But implicit in that point is the reality that this kind of creation would inevitably be perceived as a person by most users, even if it wasn’t one… From that perspective, the future in which A.I. develops nondestructively, in a way that’s personalized to the user, looks like a distinctive variation on the metaverse concept that Mark Zuckerberg’s efforts have so far failed to bring to life: A wilderness of mirrors showing us the most unexpected versions of our own reflections and a place where an entire civilization could easily get lost.