Yesterday, SemiAnalysis shared a leaked document that purports to be an internal memo written by an engineer at Google:
We’ve done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?
But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.
I’m talking, of course, about open source. Plainly put, they are lapping us… While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.
The anonymous engineer attributes many of these open source developments to the leak of Meta’s LLaMA model:
At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta’s LLaMA was leaked to the public. It had no instruction or conversation tuning, and no RLHF. Nonetheless, the community immediately understood the significance of what they had been given.
A tremendous outpouring of innovation followed, with just days between major developments… Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, etc. etc. many of which build on each other.
Indeed, while open-source language models still lag behind state-of-the-art closed models, the speed of development is unparalleled. The quality gap has already closed for text-to-image models, and there is no reason to think the same won't happen with LLMs.
To be clear, I think centralized models will remain important, if only for thin clients where compute power is limited. It is always possible, though, that open source development ends up commoditizing language models to the point where there is no reason to call OpenAI's API over a random AWS endpoint.
The author concludes that, for this reason, Google should contribute to the open source community instead of attempting to compete with it:
This recent progress has direct, immediate implications for our business strategy. Who would pay for a Google product with usage restrictions if there is a free, high quality alternative without them?
[…]
The more tightly we control our models, the more attractive we make open alternatives. Google and OpenAI have both gravitated defensively toward release patterns that allow them to retain tight control over how their models are used. But this control is a fiction. Anyone seeking to use LLMs for unsanctioned purposes can simply take their pick of the freely available models.
Google should establish itself a leader in the open source community, taking the lead by cooperating with, rather than ignoring, the broader conversation. This probably means taking some uncomfortable steps, like publishing the model weights for small ULM variants. This necessarily means relinquishing some control over our models. But this compromise is inevitable. We cannot hope to both drive innovation and control it.
A pivot towards open source is a great strategy that would clearly differentiate Google from OpenAI at a time when Google sorely needs differentiation. But remember: this memo was written by a Google engineer, not someone from Google's leadership. The higher-ups at Google appear to be moving in the opposite direction.
Nitasha Tiku, Washington Post:
In February, Jeff Dean, Google’s longtime head of artificial intelligence, announced a stunning policy shift to his staff: They had to hold off sharing their work with the outside world.
For years Dean had run his department like a university, encouraging researchers to publish academic papers prolifically; they pushed out nearly 500 studies since 2019.
[…]
Things had to change. Google would take advantage of its own AI discoveries, sharing papers only after the lab work had been turned into products, Dean said, according to two people with knowledge of the meeting.
If leadership wants to be proactive, the clock is ticking. During Meta’s most recent earnings call, Mark Zuckerberg made it clear that they intend to embrace an open source approach to AI moving forward.
Mark:
Right now most of the companies that are training large language models have business models that lead them to a closed approach to development. I think there’s an important opportunity to help create an open ecosystem. If we can help be a part of this, then much of the industry will standardize on using these open tools and help improve them further.
[…]
I mentioned LLaMA before and I also want to be clear that while I’m talking about helping contribute to an open ecosystem, LLaMA is a model that we only really made available to researchers and there’s a lot of really good stuff that’s happening there. But a lot of the work that we’re doing, I think, we would aspire to and hope to make even more open than that. So, we’ll need to figure out a way to do that.
If Google remains in stasis much longer, an open-source-first philosophy will no longer be unique; it will look like Google decided to start copying Meta instead of OpenAI.