This project provides a Chinese large language model base built on LLaMA-7B through incremental pre-training on Chinese datasets.
Features
- A Chinese pre-trained model obtained through further pre-training (Further-Pretrain), with weights released in huggingface format.
- Compared with the original LLaMA, the model's Chinese understanding and generation abilities are greatly improved, and it achieves strong results on many downstream tasks; see the evaluation section for details.
- Command-line tools are provided to make it easy to test the model's performance.
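Since the weights are released in huggingface format, they can be loaded with the standard `transformers` API. The sketch below is a minimal, hedged example; the local path `path/to/chinese-llama-7b` is a placeholder, not a repository name confirmed by this README.

```python
# Minimal sketch: loading huggingface-format LLaMA weights with transformers.
# "path/to/chinese-llama-7b" is a placeholder path, not an official repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer


def load_model(path: str):
    """Load tokenizer and causal-LM weights from a huggingface-format checkpoint."""
    tokenizer = AutoTokenizer.from_pretrained(path)
    model = AutoModelForCausalLM.from_pretrained(path)
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_model("path/to/chinese-llama-7b")
    # Encode a Chinese prompt and generate a continuation.
    inputs = tokenizer("中国的首都是", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Generation parameters such as `max_new_tokens` can be tuned as usual; any `transformers`-compatible tooling (pipelines, quantized loading, etc.) should work with these weights.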