This is the repo for the Chinese-Vicuna project, which aims to build and share an instruction-following Chinese LLaMA model that can run on a single Nvidia RTX-2080Ti. That is why we named this project Vicuna: small, but strong enough!
The repo contains:
This is our single-turn instruction demo (with beam-size=4, so you will see four candidate outputs at the same time):
tmp.mp4
This is our multi-turn instruction demo (with beam-size=4, so you will see four candidate outputs at the same time):
tmp.mp4
chat.py
(now supports four generation modes in stream/typewriter style: beam search, greedy, sampling, and beam sampling; we also added a cancel button for regeneration)
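To illustrate how the four decoding strategies differ, here is a minimal, self-contained sketch over a toy next-token distribution. This is not the actual chat.py implementation (which uses the LLaMA model and Hugging Face's `generate`); `next_token_probs` and the tiny vocabulary are hypothetical stand-ins used only to show the decoding logic.

```python
import math
import random

VOCAB = ["a", "b", "</s>"]

def next_token_probs(prefix):
    # Hypothetical toy "model": deterministic next-token distribution
    # that depends only on the prefix length. Replace with real model logits.
    if len(prefix) >= 3:
        return {"</s>": 1.0}
    if len(prefix) % 2 == 0:
        return {"a": 0.5, "b": 0.3, "</s>": 0.2}
    return {"a": 0.2, "b": 0.6, "</s>": 0.2}

def greedy_decode(max_len=5):
    # Greedy: always pick the single most probable next token.
    seq = ()
    while len(seq) < max_len:
        probs = next_token_probs(seq)
        tok = max(probs, key=probs.get)
        seq += (tok,)
        if tok == "</s>":
            break
    return seq

def sample_decode(max_len=5, seed=0):
    # Sampling: draw the next token from the full distribution.
    # (Beam sampling combines this draw step with the beam bookkeeping below.)
    rng = random.Random(seed)
    seq = ()
    while len(seq) < max_len:
        probs = next_token_probs(seq)
        toks, weights = zip(*probs.items())
        tok = rng.choices(toks, weights=weights)[0]
        seq += (tok,)
        if tok == "</s>":
            break
    return seq

def beam_search(beam_size=4, max_len=5):
    # Beam search: keep the `beam_size` highest log-probability hypotheses
    # at each step; with beam_size=4 you get the four parallel outputs
    # shown in the demos above.
    beams = [((), 0.0)]          # (token sequence, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, lp in beams:
            for tok, p in next_token_probs(seq).items():
                candidates.append((seq + (tok,), lp + math.log(p)))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, lp in candidates:
            (finished if seq[-1] == "</s>" else beams).append((seq, lp))
            if len(beams) == beam_size:
                break
        if not beams:
            break
    return sorted(finished + beams, key=lambda c: c[1], reverse=True)[:beam_size]
```

With the toy distribution, `greedy_decode()` returns `("a", "b", "a", "</s>")`, while `beam_search(beam_size=4)` returns four ranked hypotheses, mirroring the four streams you see in the demo videos.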