【CN LLM】2.1 Vertical fine-tuning - Finance
Cornucopia: a LLaMA model fine-tuned on Chinese financial knowledge
Introduction: Open-sources a LLaMA-7B model instruction-tuned on Chinese financial knowledge. The instruction dataset is built from public Chinese financial data plus crawled financial data, and LLaMA is fine-tuned on it to improve its question-answering performance in the financial domain. Building on the same data, the project plans to use the GPT-3.5 API to construct higher-quality datasets, and to further expand the instruction dataset with a Chinese financial knowledge graph.
Introduction: Open-sources BBT-FinCorpus, a Chinese financial-domain corpus; BBT-FinT5, a knowledge-enhanced pre-trained language model for the Chinese financial domain; and CFLEB, a Chinese financial natural language processing evaluation benchmark.
XuanYuan: the first hundred-billion-parameter Chinese financial dialogue model
Introduction: XuanYuan is the first open-source hundred-billion-parameter Chinese dialogue model, and the first model of that scale optimized for the Chinese financial domain. Built on BLOOM-176B, it underwent targeted pre-training and fine-tuning for both general-purpose Chinese and financial content. It can handle general questions as well as a wide range of finance-related ones, providing users with accurate and comprehensive financial information and advice.
Introduction: This project open-sources several financial large models, including ChatGLM-6B/ChatGLM2-6B+LoRA and LLaMA-7B+LoRA models, and collects Chinese and English training data covering financial news, social media, and financial reports.
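Several of the models above are produced by attaching LoRA adapters to a frozen base model (ChatGLM-6B or LLaMA-7B). A minimal sketch of the LoRA idea, with toy-sized matrices rather than a real model (real fine-tuning would use a library such as PEFT; all numbers here are illustrative assumptions):

```python
# LoRA: the frozen weight W is augmented with a low-rank update
# (alpha / r) * B @ A, where A is (r x in) and B is (out x r).
# Only A and B are trained; in practice B starts at zero so the
# adapter initially leaves the base model's output unchanged.

def matvec(M, v):
    # naive matrix-vector product for small illustrative matrices
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def lora_forward(x, W, A, B, alpha, r):
    base = matvec(W, x)               # frozen base projection: W x
    delta = matvec(B, matvec(A, x))   # low-rank update: B (A x)
    scale = alpha / r                 # standard LoRA scaling factor
    return [b + scale * d for b, d in zip(base, delta)]

# Toy example: 2-dim input/output, rank r = 1.
W = [[1, 0], [0, 1]]   # frozen identity weight
A = [[1, 1]]           # trained down-projection (1 x 2)
B = [[2], [0]]         # trained up-projection (2 x 1)
y = lora_forward([1, 2], W, A, B, alpha=1, r=1)
print(y)  # [7, 2]
```

Because only A and B receive gradients, the trainable parameter count is a small fraction of the full model, which is why these projects can adapt 6B-7B models to the financial domain on modest hardware.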
For more content, please see: 【CN LLM】Awesome Chinese LLM