Run a fast ChatGPT-like model locally on your device. The screencast below is not sped up and is running on an M2 MacBook Air with 4GB of weights.
This combines the LLaMA foundation model with an open reproduction of Stanford Alpaca, a fine-tuning of the base model to obey instructions (akin to the RLHF used to train ChatGPT), and a set of modifications to llama.cpp to add a chat interface.
The changes from alpaca.cpp have since been upstreamed into llama.cpp.
Download the zip file corresponding to your operating system from the latest release. On Windows, download alpaca-win.zip; on Mac (both Intel and ARM), download alpaca-mac.zip; and on Linux (x64), download alpaca-linux.zip.
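If you prefer the command line, the release assets can also be fetched directly. This sketch assumes the asset names above and GitHub’s standard latest-release download URL; check the releases page for the actual files:

```sh
# Sketch: fetch and unpack the Mac build from the latest release.
curl -LO https://github.com/antimatter15/alpaca.cpp/releases/latest/download/alpaca-mac.zip
unzip alpaca-mac.zip
```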
Download ggml-alpaca-7b-q4.bin and place it in the same folder as the chat executable in the zip file. There are several download options.
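Whichever source you choose, the step amounts to fetching a single file. A sketch with a placeholder URL (no specific mirror is implied):

```sh
# Sketch: <model-url> stands in for whichever mirror you choose.
curl -L -o ggml-alpaca-7b-q4.bin <model-url>
```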
Once you’ve downloaded the model weights and placed them into the same directory as the chat executable (chat.exe on Windows), run:
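```sh
./chat          # on Windows, the bundled binary is chat.exe
```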
The weights are based on the published fine-tunes from alpaca-lora, converted back into a PyTorch checkpoint with a modified script, and then quantized with llama.cpp the regular way.
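As a rough sketch of that pipeline, assuming alpaca-lora’s export script and the llama.cpp conversion and quantization tools of that era (the modified script itself is not part of this repo):

```sh
# Sketch only: merge the LoRA fine-tune into the base model and export a
# PyTorch checkpoint (script from the alpaca-lora repo).
python export_state_dict_checkpoint.py

# Convert the checkpoint to ggml f16, then quantize to 4-bit (q4_0 = type 2)
# with llama.cpp's standard tools from that period.
python convert-pth-to-ggml.py models/7B/ 1
./quantize models/7B/ggml-model-f16.bin ggml-alpaca-7b-q4.bin 2
```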
To build from source (macOS/Linux):

```sh
git clone https://github.com/antimatter15/alpaca.cpp
cd alpaca.cpp
make chat
./chat
```
To build on Windows, first download and install CMake and git. If you’ve never used git before, consider a GUI client like https://desktop.github.com/ and use it to clone this repo (paste https://github.com/antimatter15/alpaca.cpp in as the URL). Then, from a terminal inside the cloned folder, run:
```sh
cmake .
cmake --build . --config Release
```
Download ggml-alpaca-7b-q4.bin and place it in the main Alpaca directory.
Finally, run the chat executable from the terminal; you can add launch options such as --n 8 as preferred onto the same line.
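For example, assuming CMake’s default multi-config output directory from the build step above:

```sh
# From a Windows terminal, in the repo root after the Release build:
.\Release\chat.exe --n 8
```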
This combines Facebook’s LLaMA, Stanford Alpaca, alpaca-lora and its corresponding weights by Eric Wang (which uses Jason Phang’s implementation of LLaMA on top of Hugging Face Transformers), and llama.cpp by Georgi Gerganov. The chat implementation is based on Matvey Soloviev’s Interactive Mode for llama.cpp. Inspired by Simon Willison’s getting started guide for LLaMA, and Andy Matuschak’s thread on adapting this to 13B using fine-tuning weights by Sam Witteveen.
Note that the model weights are only to be used for research purposes: they are derivative of LLaMA, and they use the published instruction data from the Stanford Alpaca project, which was generated with OpenAI models; OpenAI disallows using its outputs to train competing models.