【CN LLM】2.1 Vertical fine-tuning - Legal
Xiezhi (LawGPT_zh): Chinese legal dialogue language model
Introduction: This project's open-source Chinese legal general model is obtained by 16-bit LoRA instruction fine-tuning of ChatGLM-6B. The dataset combines existing legal question-answering datasets with high-quality legal QA pairs generated via Self-Instruct, guided by statutes and real cases. This improves the performance of a general-purpose large language model in the legal field and makes the model's answers more reliable and professional.
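Several entries in this list use LoRA fine-tuning. The idea is to freeze the pretrained weight matrix and train only a low-rank update, which shrinks the number of trainable parameters dramatically. A minimal numpy sketch of the mechanism (toy dimensions, not any project's actual code):

```python
import numpy as np

# LoRA keeps the pretrained weight W frozen and learns a low-rank
# update B @ A, so only r * (d_in + d_out) parameters are trained
# instead of d_in * d_out.
rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2              # toy sizes; real models use thousands

W = rng.normal(size=(d_out, d_in))    # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01 # trainable down-projection
B = np.zeros((d_out, r))              # trainable up-projection, zero-init

def lora_forward(x, scale=1.0):
    """Forward pass: frozen path plus scaled low-rank adapter path."""
    return W @ x + scale * (B @ (A @ x))

x = rng.normal(size=d_in)
# Because B starts at zero, the adapted model initially behaves
# exactly like the pretrained model.
assert np.allclose(lora_forward(x), W @ x)

full_params = W.size            # 64 for the toy sizes above
lora_params = A.size + B.size   # 32: already half, and the gap grows with d
print(f"full: {full_params} params, LoRA(r={r}): {lora_params} params")
```

In practice projects like these use a library such as Hugging Face PEFT rather than hand-rolled matrices; the sketch only shows why the parameter count drops.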
LaWGPT: A Large Language Model Based on Chinese Legal Knowledge
Introduction: This series of models extends the vocabulary with legal-domain terms and performs large-scale pre-training on Chinese legal corpora on top of general Chinese base models (such as Chinese-LLaMA and ChatGLM), strengthening the models' basic semantic understanding in the legal field. On that foundation, a legal-domain dialogue QA dataset and a Chinese judicial examination dataset are constructed for instruction fine-tuning, improving the model's ability to understand and handle legal content.
LexiLaw: Chinese legal model
Introduction: LexiLaw is a Chinese legal model fine-tuned from ChatGLM-6B on legal-domain datasets. It aims to provide legal practitioners, students, and ordinary users with accurate and reliable legal consulting services, including advice on specific legal issues, queries about legal terms, case analysis, and interpretation of regulations.
Lawyer LLaMA: Chinese legal LLaMA
Introduction: This project open-sources a series of legal-domain instruction fine-tuning datasets and the parameters of a Chinese legal large model trained from LLaMA. Lawyer LLaMA first performs continual pretraining on a large-scale legal corpus. On that basis, with the help of ChatGPT, the authors collected analyses of objective questions from China's National Unified Legal Professional Qualification Examination (the "law exam") together with answers to legal consultations, and used this data for fine-tuning so that the model learns to apply legal knowledge to concrete scenarios.
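Collected exam analyses and consultation answers must be converted into a uniform instruction-tuning format before fine-tuning. A minimal sketch of that conversion step; the field names and prompt template here are illustrative, not the actual Lawyer LLaMA format:

```python
# Toy stand-ins for collected law-exam analyses / consultation answers.
raw_pairs = [
    {"question": "What is the general statute of limitations for a civil claim?",
     "answer": "Under the Civil Code it is generally three years, counted from "
               "when the right holder knew or should have known of the harm."},
]

def to_instruction_record(pair):
    """Wrap one Q&A pair into an Alpaca-style instruction record."""
    return {
        "instruction": "Answer the following legal question and explain the reasoning.",
        "input": pair["question"],
        "output": pair["answer"],
    }

dataset = [to_instruction_record(p) for p in raw_pairs]
print(dataset[0]["input"])
```

The resulting list of records is what supervised fine-tuning scripts typically consume, one record per training example.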
HanFei: Chinese full-parameter legal large model
Introduction: HanFei-1.0 is described as China's first legal large model trained with full-parameter updates, with 7B parameters. Its main capabilities include legal question answering, multi-turn dialogue, article writing, and search.
ChatLaw: Chinese legal large model series
Introduction: A series of legal-domain large models open-sourced by a Peking University team, including ChatLaw-13B (trained from Ziya-LLaMA-13B-v1), ChatLaw-33B (trained from Anima-33B, with much stronger logical reasoning), and ChatLaw-Text2Vec, a BERT-based similarity-matching model trained on a dataset of 930,000 court judgments that matches a user's question to the relevant legal articles.
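The retrieval step behind a Text2Vec-style matcher is straightforward: embed the user's question and every statute, then pick the statute with the highest cosine similarity. A minimal sketch with toy 4-d vectors standing in for BERT sentence embeddings (the statute texts and vectors are made up for illustration):

```python
import numpy as np

# Toy corpus: statute snippets and their (fake) sentence embeddings.
statutes = ["Article 1043 ...", "Article 188 ...", "Article 577 ..."]
statute_vecs = np.array([
    [0.9, 0.1, 0.0, 0.1],
    [0.1, 0.8, 0.2, 0.0],
    [0.0, 0.2, 0.9, 0.1],
])

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(question_vec):
    """Return (statute, score) for the best-matching legal article."""
    scores = [cosine(question_vec, v) for v in statute_vecs]
    best = int(np.argmax(scores))
    return statutes[best], scores[best]

q = np.array([0.05, 0.75, 0.25, 0.0])   # embedding of a user question
text, score = match(q)
print(text, round(score, 3))            # closest to the second statute
```

A real system would produce the embeddings with the trained BERT model and use an approximate-nearest-neighbor index instead of a linear scan, but the matching criterion is the same.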
Introduction: This project, jointly developed by a team at Saarland University in Germany and a team at Nanjing University in China, open-sources a series of large models for the Chinese judicial field, such as Law-GLM-10B, obtained by instruction fine-tuning the GLM-10B model on 30 GB of Chinese legal data.
For more content, please see: 【CN LLM】Awesome Chinese LLM collection