Stanford, 7 billion parameters, downloadable
Alpaca was announced in March 2023. It’s fine-tuned from Meta’s LLaMA 7B model, described above, and trained on 52k instruction-following demonstrations.
One of the goals of this model is to help the academic community engage with instruction-following models by providing an open-source model that rivals OpenAI’s GPT-3.5 (text-davinci-003). To this end, Alpaca was kept small and cheap to reproduce (fine-tuning took 3 hours on 8x A100s, at a cost of under $100), and all training data and techniques have also been released.
Alpaca wins our pick for the model to use if you only want it for research or personal projects, as the license explicitly prohibits commercial use. However, combined with techniques like LoRA, this model can be fine-tuned on consumer-grade GPUs and can even be run (slowly) on a Raspberry Pi.
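To give a feel for why LoRA makes fine-tuning feasible on consumer hardware, here is a minimal NumPy sketch of the low-rank adaptation idea (not Alpaca's actual training code; all names and sizes are illustrative). Instead of updating a frozen weight matrix W directly, LoRA learns a small low-rank correction B·A, cutting the trainable parameter count dramatically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; in a real 7B model d would be in the thousands,
# with LoRA rank r typically 4-64, so r << d.
d_out, d_in, r = 64, 64, 4

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight (not trained)
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # zero-initialized, so the adapter starts as a no-op

def forward(x):
    # Adapted layer: W x + B (A x). Only A and B receive gradients,
    # so we train r*(d_in + d_out) parameters instead of d_out*d_in.
    return W @ x + B @ (A @ x)

x = rng.normal(size=(d_in,))
assert np.allclose(forward(x), W @ x)  # before training, output matches the base model

trainable = A.size + B.size            # 4*64 + 64*4 = 512 parameters
print(trainable, W.size)               # 512 vs 4096: an 8x reduction at this toy scale
```

At real model scale the savings are far larger, which is why a 7B-parameter model can be adapted on a single consumer GPU that could never hold full-precision gradients for every weight.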
Visit Official Website