Chinese-LLaMA-Alpaca-2

About Chinese-LLaMA-Alpaca-2

This project is built on Llama-2, the commercially usable large model released by Meta, and is the second phase of the Chinese LLaMA & Alpaca large model project. It open-sources the Chinese LLaMA-2 base models and the Chinese Alpaca-2 instruction-tuned models. These models expand and optimize the Chinese vocabulary of the original Llama-2, undergo incremental pre-training on large-scale Chinese data, and further improve performance on Chinese semantic understanding and instruction following. The models support FlashAttention-2 training and a native 4K context that can be extended up to 18K+ via the NTK method.
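
As a quick illustration of the 🤗transformers support and NTK-based context extension described above, here is a minimal sketch. The Hugging Face Hub model ID and the scaling factor are assumptions for illustration; check the repository for the published model names and recommended settings.

```python
# Minimal sketch: load a Chinese-Alpaca-2 model with transformers and
# enable dynamic NTK rope scaling to stretch the native 4K context.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hfl/chinese-alpaca-2-7b"  # assumed Hub ID; the actual ID may differ

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    # Dynamic NTK scaling as exposed by transformers for Llama models;
    # a factor of 4.0 targets roughly 16K tokens. The exact settings
    # behind the 18K+ figure are not stated on this page.
    rope_scaling={"type": "dynamic", "factor": 4.0},
)
```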

Main content of this project:

- 🚀 Expanded a new Chinese vocabulary for the Llama-2 model and open-sourced the Chinese LLaMA-2 and Alpaca-2 large models
- 🚀 Open-sourced pre-training and instruction fine-tuning scripts, so users can further train the models as needed
- 🚀 Quantize and deploy the models locally on a personal computer's CPU/GPU for a quick hands-on experience (see the sketch after this list)
- 🚀 Support 🤗transformers, llama.cpp, text-generation-webui, LangChain, privateGPT, vLLM, and other parts of the LLaMA ecosystem
- Currently open-sourced models: Chinese-LLaMA-2 (7B/13B) and Chinese-Alpaca-2 (7B/13B) (for larger models, please refer to the first phase of the project)
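
As a sketch of the local CPU/GPU deployment path, the following uses llama-cpp-python from the llama.cpp ecosystem listed above. It assumes the model has already been converted and quantized to a GGUF file; the file name and the prompt template are illustrative assumptions (the Alpaca-2 models follow a Llama-2-chat style template, but check the repository for the exact system prompt).

```python
# Minimal local-inference sketch with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="chinese-alpaca-2-7b-q4_0.gguf",  # assumed quantized file name
    n_ctx=4096,   # matches the models' native 4K context
    n_threads=8,  # CPU threads; tune for your machine
)

# Llama-2-chat style prompt; the system prompt here is an assumption.
prompt = (
    "[INST] <<SYS>>\nYou are a helpful assistant.\n<</SYS>>\n\n"
    "请用中文介绍一下你自己。 [/INST]"
)
output = llm(prompt, max_tokens=256, stop=["</s>"])
print(output["choices"][0]["text"])
```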

Visit Official Website

https://github.com/ymcui/Chinese-LLaMA-Alpaca-2
