【CN LLM】2.1 Vertical Fine-Tuning - Legal


AI Learning Assistant No 1
August 16th, 2023

  • Xiezhi (LawGPT_zh): Chinese legal dialogue language model

Address: https://www.hayo.com/entry/5526

Introduction: The project's open-source Chinese legal general-purpose model is obtained by 16-bit LoRA instruction fine-tuning of ChatGLM-6B. The training data combines existing legal Q&A datasets with high-quality legal Q&A generated via self-instruct from statutes and real cases, improving the general language model's performance in the legal domain and making its answers more reliable and professional.
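
As a concrete illustration of this approach, below is a minimal sketch of LoRA instruction fine-tuning on ChatGLM-6B using the Hugging Face peft library. The rank, dropout, and target module are common choices for ChatGLM, not the project's actual configuration.

```python
# Minimal LoRA fine-tuning sketch (illustrative; not the project's script).
# Assumes the `transformers` and `peft` libraries are installed.
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "THUDM/chatglm-6b"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True).half()

# LoRA freezes the base weights and trains small low-rank adapters,
# which is what makes 16-bit instruction fine-tuning fit on a single GPU.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                 # adapter rank (assumed value)
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],  # ChatGLM's fused attention projection
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()       # typically <1% of all parameters
```

The instruction data itself (legal Q&A pairs, including the self-instruct generations the project describes) would then be fed to a standard causal-LM training loop.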

  • LaWGPT: A Large Language Model Based on Chinese Legal Knowledge

Address: https://www.hayo.com/entry/5529 

Introduction: This series of models builds on general Chinese base models (e.g., Chinese-LLaMA, ChatGLM) by expanding the vocabulary with legal-domain terms and pretraining on a large-scale Chinese legal corpus, strengthening the models' basic semantic understanding of legal text. On that foundation, a legal-domain dialogue Q&A dataset and a Chinese judicial examination dataset are constructed for instruction fine-tuning, improving the models' ability to understand and carry out legal tasks.
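
The vocabulary-expansion step mentioned above can be sketched as follows: new legal-domain tokens are added to the tokenizer, and the embedding matrix is resized so the new rows can be learned during continual pretraining. The base checkpoint and the term list are illustrative assumptions.

```python
# Sketch of legal-domain vocabulary expansion (illustrative only).
from transformers import AutoTokenizer, AutoModelForCausalLM

base = "hfl/chinese-llama-2-7b"   # assumed Chinese base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical legal terms; a real project mines these from its corpus.
legal_terms = ["不可抗力", "正当防卫", "诉讼时效"]
num_added = tokenizer.add_tokens(legal_terms)

# Resize the embedding matrix so the new token IDs have trainable rows;
# continual pretraining on the legal corpus then learns their vectors.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; vocab size is now {len(tokenizer)}")
```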

  • LexiLaw: Chinese legal model

Address: https://www.hayo.com/entry/5532

Introduction: LexiLaw is a Chinese legal model fine-tuned from ChatGLM-6B on legal-domain datasets. It aims to provide legal practitioners, students, and ordinary users with accurate and reliable legal consulting services, including answering specific legal questions, looking up legal terms, analyzing cases, and interpreting regulations.
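
Since LexiLaw keeps the ChatGLM-6B architecture, consulting it should look like ordinary ChatGLM inference. The sketch below loads the base checkpoint as a stand-in; consult the project page for the actual LexiLaw weights and loading instructions.

```python
# Usage sketch for a ChatGLM-6B-based legal model (base weights shown
# as a stand-in; substitute the LexiLaw checkpoint from the project page).
from transformers import AutoModel, AutoTokenizer

ckpt = "THUDM/chatglm-6b"
tokenizer = AutoTokenizer.from_pretrained(ckpt, trust_remote_code=True)
model = AutoModel.from_pretrained(ckpt, trust_remote_code=True).half().cuda().eval()

# ChatGLM exposes a multi-turn `chat` helper; pass the history back in
# to keep conversational context across turns.
history = []
query = "What does the statute of limitations mean in Chinese civil law?"
response, history = model.chat(tokenizer, query, history=history)
print(response)
```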

  • Lawyer LLaMA: Chinese legal LLaMA

Address: https://www.hayo.com/entry/5535

Introduction: The project open-sources a series of legal-domain instruction fine-tuning datasets along with the parameters of a Chinese legal large model trained from LLaMA. Lawyer LLaMA first performs continual pretraining on a large-scale legal corpus. On that basis, with the help of ChatGPT, the authors collected analyses of objective questions from China's National Unified Legal Professional Qualification Examination (the "law exam") together with answers to legal consultations, and used this data to fine-tune the model so it learns to apply legal knowledge to concrete situations.
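
Collected exam analyses and consultation answers are typically serialized as instruction-tuning records before fine-tuning. Below is a hedged sketch using the common Alpaca-style schema; the field names and the sample record are illustrative conventions, not the project's actual format.

```python
# Sketch: packaging collected legal Q&A into instruction-tuning records.
# Alpaca-style field names are a common convention, assumed here.
import json

examples = [
    {
        "instruction": "Answer the legal consultation below, citing the relevant statutes.",
        "input": "My landlord withheld my deposit without cause. What can I do?",
        "output": "(placeholder; filled from collected exam analyses or consultation answers)",
    },
]

with open("legal_instructions.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```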

  • HanFei

Address: https://www.hayo.com/entry/5469

Introduction: HanFei-1.0 (Han Fei) is the first Chinese legal large model trained with full-parameter updates, with 7B parameters. Its main functions include legal question answering, multi-turn dialogue, article writing, and search.
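
For contrast with the LoRA-style projects above, a full-parameter run keeps every weight trainable, which is what drives the memory and compute cost of a model like HanFei. A minimal sketch with the Hugging Face Trainer follows; the model name and hyperparameters are placeholders, not HanFei's actual setup.

```python
# Sketch of full-parameter fine-tuning (every weight trainable, unlike LoRA).
# Model name and hyperparameters are placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "my-org/my-7b-base"  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Nothing is frozen: all parameters receive gradients.
assert all(p.requires_grad for p in model.parameters())

args = TrainingArguments(
    output_dir="full-ft-out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,  # compensates for the tiny per-device batch
    bf16=True,                       # mixed precision to fit a 7B model in memory
    num_train_epochs=1,
)
# trainer = Trainer(model=model, args=args, train_dataset=...)  # dataset elided
# trainer.train()
```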

  • ChatLaw: Legal large model

Address: https://www.hayo.com/entry/4641

Introduction: A series of legal-domain large models open-sourced by Peking University, including ChatLaw-13B (trained on Ziya-LLaMA-13B-v1, the "Jiang Ziya" model), ChatLaw-33B (trained on Anima-33B, with greatly improved logical reasoning), and ChatLaw-Text2Vec, a BERT-based similarity matching model trained on a dataset of 930,000 judgment documents that matches a user's question to the corresponding legal articles.
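
The ChatLaw-Text2Vec component is essentially dense retrieval: embed the user's question and candidate statutes with a BERT-style encoder and rank by cosine similarity. The sketch below uses a generic open Chinese text2vec model as a stand-in for the project's own matcher, and the statute strings are paraphrased samples, not verbatim law text.

```python
# Question-to-statute matching sketch in the style of ChatLaw-Text2Vec.
# Uses a generic Chinese encoder as a stand-in; the real project trains
# its own BERT-based matcher on ~930k judgment documents.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("shibing624/text2vec-base-chinese")  # assumed stand-in

statutes = [  # paraphrased sample entries, not verbatim statute text
    "民法典第五百七十七条：当事人一方不履行合同义务的，应当承担违约责任。",
    "刑法第二十条：为制止不法侵害而采取的正当防卫行为，不负刑事责任。",
]
query = "我付了钱但卖家一直不发货，该怎么办？"

q_emb = model.encode(query, convert_to_tensor=True)
s_emb = model.encode(statutes, convert_to_tensor=True)
scores = util.cos_sim(q_emb, s_emb)[0]   # cosine similarity with each statute
best = int(scores.argmax())
print(statutes[best], float(scores[best]))
```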

  • lychee_law: Legal knowledge model

Address: https://github.com/davidpig/lychee_law 

Introduction: This project is jointly developed by a team at Saarland University in Germany and a team at Nanjing University in China. It open-sources a series of large models for the Chinese judicial domain, such as Law-GLM-10B, obtained by instruction fine-tuning the GLM-10B base model on 30 GB of Chinese legal data.

For more content, please see: 【CN LLM】Awesome Chinese LLM

