Alpaca-LoRA

About Alpaca-LoRA
🦙🌲🤏 Alpaca-LoRA

  • 🤗 Try the pretrained model out here, courtesy of a GPU grant from Hugging Face!
  • Users have created a Discord server for discussion and support here
  • 4/14: Chansung Park’s GPT4-Alpaca adapters: #340

This repository contains code for reproducing the Stanford Alpaca results using low-rank adaptation (LoRA). We provide an Instruct model of similar quality to text-davinci-003 that can run on a Raspberry Pi (for research), and the code is easily extended to the 13b, 30b, and 65b models.
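To make the low-rank adaptation idea concrete, here is a minimal sketch in plain Python (not code from the repository). Following the usual LoRA formulation, a frozen weight matrix W of shape d × k is adapted as W' = W + (alpha / r) · B·A, where B (d × r) and A (r × k) are the only trained parameters and r is much smaller than d and k; the matrix sizes and values below are toy assumptions for illustration.

```python
def matmul(a, b):
    """Naive matrix multiply for matrices given as lists of lists."""
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def lora_merge(w, b, a, alpha, r):
    """Merge the scaled low-rank update (alpha / r) * B @ A into the frozen W."""
    ba = matmul(b, a)
    s = alpha / r
    return [[w[i][j] + s * ba[i][j] for j in range(len(w[0]))]
            for i in range(len(w))]

# Toy example: d = k = 2, rank r = 1, so only r * (d + k) = 4 values are trained
# instead of the full d * k = 4 (the savings grow with larger matrices).
W = [[1.0, 0.0],
     [0.0, 1.0]]       # frozen pretrained weight
B = [[0.5], [1.0]]     # d x r, trained
A = [[2.0, 0.0]]       # r x k, trained
alpha, r = 2, 1

W_merged = lora_merge(W, B, A, alpha, r)

# Applying the merged weight to an input x gives the same result as the
# base output plus the scaled low-rank path, so the adapter can be merged
# into W for inference at no extra cost.
x = [[3.0], [4.0]]
base = matmul(W, x)
low_rank = matmul(B, matmul(A, x))
s = alpha / r
combined = [[base[i][0] + s * low_rank[i][0]] for i in range(len(base))]
merged_out = matmul(W_merged, x)
```

In training, W stays frozen and only A and B receive gradients, which is what makes fine-tuning cheap enough for a single consumer GPU.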

In addition to the training code, which runs within hours on a single RTX 4090, we publish a script for downloading and inference on the foundation model and LoRA, as well as the resulting LoRA weights themselves. To fine-tune cheaply and efficiently, we use Hugging Face’s PEFT as well as Tim Dettmers’ bitsandbytes.

Without hyperparameter tuning, the LoRA model produces outputs comparable to the Stanford Alpaca model. (Please see the outputs included below.) Further tuning might be able to achieve better performance; I invite interested users to give it a try and report their results.

Official repository: https://github.com/tloen/alpaca-lora
