Hey, so, remember when I mentioned LLaMA, Meta’s ChatGPT alternative? I thought it was exciting for two reasons:
- It requires less computing power for inference than similarly powerful models
- It is open source, at least in the sense that academic researchers can apply for access to the model weights.
Well, less than a week after it was released, someone leaked the model weights online, allowing anyone to download and run the model without pre-approval from Meta. Here is a Hugging Face Space where you can try out the smaller, 7-billion-parameter LLaMA variant.
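If you would rather run it yourself, here is a minimal sketch of what that might look like with the Hugging Face Transformers library. This assumes you have a copy of the 7B weights already converted to the Transformers format; the model path below is a placeholder, not an official distribution.

```python
# Minimal sketch: generate text from a local copy of LLaMA-7B,
# assuming the leaked weights have been converted to the Hugging Face
# Transformers format. "./llama-7b-hf" is a hypothetical local path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./llama-7b-hf"  # placeholder: your converted weights

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # half precision so 7B fits on one GPU
    device_map="auto",          # requires the `accelerate` package
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```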
I am of two minds about this. On the one hand, I think this has the chance to kick off a “Stable Diffusion moment” for large language models. To that end, I am already seeing projects that tout enormous performance improvements. The story from 2022 onward will be that the open source community can contribute engineering developments to generative AI at breathtaking speed when given the opportunity. This is certainly already the case with image generation, and I think it is inevitable that the same will happen for text. Whether or not LLaMA is the basis for this is, to some extent, up to Meta now.
On the other hand, this leak might have the consequence of making AI development less open. If large companies feel they cannot safely share their results with select researchers, all of this work might remain where it is today: either locked inside Google or accessible only through a paid API from OpenAI. And that is not the future I would like to see.