LaWGPT is a series of open-source large language models built around Chinese legal knowledge.
These models are built on general-purpose Chinese base models (such as Chinese-LLaMA and ChatGLM). The vocabulary is expanded with legal-domain terms, and the models are further pre-trained on a large-scale Chinese legal corpus, which strengthens their basic semantic understanding of the legal domain. On this foundation, a legal-domain dialogue question-answering dataset and a Chinese judicial examination dataset are constructed for instruction fine-tuning, which improves the models' ability to understand and follow legal instructions.
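The first step of this pipeline, legal-vocabulary expansion, can be sketched as follows. This is a minimal illustrative example, not LaWGPT's actual code: the token names, vocabulary, and embedding size are toy assumptions. New domain tokens are appended to the base vocabulary, and the embedding table is grown to match, with new rows initialized from the mean of the existing embeddings (a common heuristic).

```python
import random

# Toy base vocabulary standing in for the general Chinese base model's tokenizer.
base_vocab = {"<s>": 0, "</s>": 1, "法": 2}

# Hypothetical legal-domain terms to add (plaintiff, defendant, statute).
legal_tokens = ["原告", "被告", "法条"]

# Toy embedding table matching the base vocabulary.
random.seed(0)
dim = 8
emb = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(len(base_vocab))]

# Append each new token with the next free id, skipping duplicates.
for tok in legal_tokens:
    if tok not in base_vocab:
        base_vocab[tok] = len(base_vocab)

# Resize the embedding table: keep old rows, initialize each new row
# from the column-wise mean of the existing embeddings.
mean_row = [sum(col) / len(emb) for col in zip(*emb)]
while len(emb) < len(base_vocab):
    emb.append(list(mean_row))

print(len(base_vocab), len(emb), len(emb[0]))  # 6 6 8
```

In practice this corresponds to calling `add_tokens` on the tokenizer and `resize_token_embeddings` on the model when using the Hugging Face Transformers library, before continued pre-training on the legal corpus.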
This project is under continuous development; the legal-domain datasets and the model series will be open-sourced successively, so stay tuned.