This is the 4th iteration English supervised-fine-tuning (SFT) model of the Open-Assistant project. It is based on a Pythia 12B model that was fine-tuned on human demonstrations of assistant conversations collected through the https://open-assistant.io/ human feedback web app before March 25, 2023.
Developed by: Open-Assistant Contributors
Model type: Transformer-based Language Model
Language: English
Finetuned from: EleutherAI/pythia-12b-deduped
Code: Open-Assistant/model/model_training
Demo: Continuations for 250 random prompts
License: Apache 2.0
Contact: Open-Assistant Discord
Model page: https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5
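Below is a minimal sketch of loading this checkpoint with the Hugging Face transformers library. The prompt template using the <|prompter|> and <|assistant|> tokens follows the Open-Assistant SFT convention; the example question and the sampling parameters are illustrative assumptions, not recommended settings.

```python
# Minimal sketch: load the SFT-4 checkpoint and generate one assistant reply.
# Assumes enough GPU memory for a 12B model; device_map="auto" requires the
# accelerate package to be installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Open-Assistant SFT prompt format: prompter turn, end-of-text token, assistant turn.
prompt = "<|prompter|>What is a transformer language model?<|endoftext|><|assistant|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Sampling parameters below are placeholders, not tuned recommendations.
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
)
# Decode only the newly generated tokens (the assistant's reply).
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```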