【CN LLM】5.LLM Tutorial

AI Learning Assistant No 1
August 16th, 2023

LLM Fundamentals

  • HuggingLLM:

Address:https://github.com/datawhalechina/hugging-llm 

Introduction: Introduces the principles, usage, and applications of ChatGPT, lowering the barrier to entry so that more interested people outside NLP and algorithm engineering can use LLMs to create value.

  • LLMsPracticalGuide:

Address:https://github.com/Mooler0410/LLMsPracticalGuide 

Introduction: This project provides a series of guides and a curated list of LLM resources, covering the development history of LLMs, their principles, examples, papers, and more.

Prompt Engineering Tutorial

  • Introductory LLM Course for Developers:

Address:https://github.com/datawhalechina/prompt-engineering-for-developers 

Introduction: A Chinese-language introductory tutorial on large models for developers, built around Andrew Ng's large-model course series. It mainly includes Chinese versions of Andrew Ng's "ChatGPT Prompt Engineering for Developers", "Building Systems with the ChatGPT API", and "LangChain for LLM Application Development" courses.

  • Prompt Engineering Guide:

Address:https://www.promptingguide.ai/zh

Introduction: Motivated by a strong interest in large language models, this project wrote this prompt engineering guide, which covers research papers on large language models, study guides, models, lectures, reference materials, LLM capabilities, and other tools related to prompt engineering.

  • awesome-chatgpt-prompts-zh:

Address:https://github.com/PlexPt/awesome-chatgpt-prompts-zh 

Introduction: This project is a Chinese-language guide to prompting ChatGPT. It collects prompts for a variety of scenarios, shows how to get ChatGPT to follow your instructions, and offers references for constructing your own prompts; a minimal prompt-construction sketch follows this list.
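
As a small illustration of the prompt-construction advice in the tutorials above, the sketch below calls the OpenAI chat API with an explicit task description and a delimited input. It assumes the 0.x-era openai Python package (current when this article was written) and an OPENAI_API_KEY environment variable; the model name, delimiter, and example text are illustrative only.

    import os
    import openai  # 0.x-era SDK; openai>=1.0 uses a different client interface

    openai.api_key = os.getenv("OPENAI_API_KEY")

    # State the task, output language, and length explicitly, and delimit the
    # input text so the model does not mistake it for further instructions.
    text = "LangChain is a framework for building applications powered by language models."
    prompt = (
        "Summarize the text delimited by ### in one sentence of Chinese.\n"
        f"###{text}###"
    )

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # example model name
        messages=[
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": prompt},
        ],
        temperature=0,  # deterministic output makes prompt changes easier to compare
    )
    print(response["choices"][0]["message"]["content"])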

LLM Application Tutorial

  • LangChain 🦜️🔗 Chinese website, learn LLM/GPT development with LangChain:

Address:https://www.langchain.asia

Introduction: Chinese-language LangChain documentation maintained by two entrepreneurs working on LLMs, intended to help newcomers to AI application development; a minimal chain sketch appears after this list.

  • OpenAI Cookbook:

Address:https://github.com/openai/openai-cookbook 

Introduction: Examples and guidance provided by OpenAI for using the OpenAI API, including a tutorial on building a question-answering bot, which can guide practitioners when they develop similar applications.

  • Building large language model applications: application development and architecture design:

Address:https://github.com/phodal/aigc 

Introduction: This project open-sourced an e-book on applying LLMs in the real world, introducing the basics and applications of large language models and how to build your own. Topics include writing, developing, and managing prompts; exploring what the best large language models can offer; and patterns and architecture design for LLM application development.
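
To make the application-development workflow described above concrete, here is a minimal single-chain sketch. It assumes the 2023-era langchain 0.0.x Python API and an OPENAI_API_KEY environment variable; the prompt template and question are illustrative only.

    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain

    # A prompt template with one input variable, filled in at call time.
    prompt = PromptTemplate(
        input_variables=["question"],
        template="Answer the question in one short paragraph: {question}",
    )

    llm = OpenAI(temperature=0)  # reads OPENAI_API_KEY from the environment
    chain = LLMChain(llm=llm, prompt=prompt)

    print(chain.run(question="What problems does LangChain try to solve?"))

More complex applications, such as the question-answering bot described in the OpenAI Cookbook, typically add retrieval and memory components around a chain like this rather than hand-editing ever-larger prompts.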

LLM Practical Tutorial

  • LLMs Nine-Story Demon Tower (LLMsNineStoryDemonTower):

Address:https://github.com/km1994/LLMsNineStoryDemonTower 

Introduction: Hands-on practice and experience with ChatGLM, Chinese-LLaMA-Alpaca, MiniGPT-4, FastChat, LLaMA, gpt4all, and other models.

  • llm-action:

Address:https://github.com/liguodongiot/llm-action 

Introduction: This project provides a series of hands-on LLM tutorials and code covering LLM training, inference, and fine-tuning, along with technical articles on the LLM ecosystem; a minimal inference sketch follows this list.
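
As a small companion to these hands-on tutorials, the sketch below runs plain Hugging Face Transformers inference. The BLOOM-560m checkpoint is chosen only because it is small enough to run on a CPU; it is not required by either project above.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "bigscience/bloom-560m"  # small illustrative checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    # Greedy decoding of a short continuation.
    inputs = tokenizer("The main steps in fine-tuning a large language model are", return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))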

LLM Efficient Fine-Tuning Tutorial

  • LLaMA Efficient Tuning:

Address:https://github.com/hiyouga/LLaMA-Efficient-Tuning 

Introduction: This project provides an easy-to-use, PEFT-based LLaMA fine-tuning framework that implements pre-training, instruction fine-tuning, and RLHF with full-parameter, LoRA, and QLoRA training, and supports LLaMA, BLOOM, Falcon, Baichuan, InternLM, and other base models; a minimal LoRA setup sketch appears after this list.

  • ChatGLM Efficient Tuning:

Address:https://github.com/hiyouga/ChatGLM-Efficient-Tuning 

Introduction: This project provides PEFT-based efficient fine-tuning for ChatGLM, supporting LoRA, P-Tuning v2, full-parameter fine-tuning, and other modes, and is adapted to multiple fine-tuning datasets.

  • bert4torch:

Address:https://github.com/Tongjilibo/bert4torch 

Introduction: This project provides a large-model training and deployment framework that covers the main current open-source large models (the LLaMA series, ChatGLM, the BLOOM series, etc.) and also gives pre-training and fine-tuning examples.
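
The fine-tuning frameworks above all build on PEFT-style adapters. The sketch below shows the bare LoRA setup step with the Hugging Face peft library, reusing the small BLOOM checkpoint from the earlier inference example; it only wraps the model and does not include a training loop, and the hyperparameters are illustrative.

    from peft import LoraConfig, TaskType, get_peft_model
    from transformers import AutoModelForCausalLM

    base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

    lora_config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=8,                                 # rank of the low-rank update matrices
        lora_alpha=16,                       # scaling factor applied to the update
        lora_dropout=0.05,
        target_modules=["query_key_value"],  # BLOOM's fused attention projection
    )

    model = get_peft_model(base_model, lora_config)
    model.print_trainable_parameters()  # only the LoRA adapter weights are trainable

    # From here the wrapped model can be trained with a standard transformers
    # Trainer; roughly this, plus dataset handling and other training modes,
    # is what the frameworks listed above package up.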

For more content, please follow: 【CN LLM】Awesome Chinese LLM
