Meta recently open-sourced the Llama 2 model under a license that allows commercial use. It seems Meta is determined to take on OpenAI ("ClosedAI") to the end. Although Llama 2 upgrades the original LLaMA model, its Chinese support is still weak and it needs Chinese-specific adaptation. So we decided to build a Chinese version of Llama 2:
⏳ Chinese-LlaMA2: large-scale Chinese pre-training of Llama 2;
Note that, in accordance with the corresponding license, we will release the complete model weights with the LoRA update merged in, and also release the standalone LoRA weights for the convenience of the open-source community.
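To illustrate what "merged" means here: LoRA trains a low-rank update on top of a frozen base weight, and merging folds that update back into the base matrix so no adapter is needed at inference time. A minimal NumPy sketch of the merge arithmetic (toy shapes and names, not the project's actual release code):

```python
import numpy as np

def merge_lora(W, A, B, alpha, r):
    # LoRA's trained update is low-rank: delta_W = (alpha / r) * B @ A.
    # Merging adds this update into the frozen base weight W, producing
    # a single dense matrix that behaves like base + adapter.
    return W + (alpha / r) * (B @ A)

# Toy example: a 4x4 base weight with a rank-2 adapter.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
A = rng.standard_normal((2, 4))   # down-projection (r x d_in)
B = rng.standard_normal((4, 2))   # up-projection  (d_out x r)

merged = merge_lora(W, A, B, alpha=16, r=2)
# The difference from the base is exactly the scaled low-rank update.
assert np.allclose(merged - W, (16 / 2) * (B @ A))
```

Releasing both forms serves both audiences: the merged checkpoint is a drop-in replacement for the base model, while the small standalone LoRA weights let users re-apply or further fine-tune the adaptation themselves.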
We will also build various vertical-domain models around Chinese-LlaMA2.