**Wenda: a large language model invocation platform**
This project aims to enable efficient content generation in task-specific settings, while accounting for the limited computing resources of individuals and small-to-medium enterprises as well as knowledge-security and privacy concerns.
To this end, the platform integrates the following capabilities:
- Knowledge base: supports local offline vector stores, local search engines, and online search engines as retrieval sources.
- Multiple large language models: offline deployment currently supports ChatGLM-6B/ChatGLM2-6B, ChatRWKV, the LLaMA family (not recommended for Chinese users), MOSS (not recommended), Baichuan (must be paired with a LoRA, otherwise results are poor), Aquila-7B, and InternLM; online access is available through the OpenAI API and the ChatGLM-130B API.
- Auto scripts: extend the platform with JavaScript plug-ins, including but not limited to custom dialogue flows, calls to external APIs, and online switching of LoRA models.
- Other capabilities needed in practice: dialog history management, intranet deployment, simultaneous use by multiple users, etc.
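To illustrate the knowledge-base capability above — retrieving relevant passages from a local offline vector store and prepending them to the prompt — here is a minimal sketch. The `embed` function and the document set are hypothetical stand-ins for a real embedding model and corpus; Wenda's actual retrieval pipeline is configurable and not shown here.

```python
import numpy as np

def embed(text):
    # Hypothetical stand-in for a real sentence embedder: hashes
    # character bigrams into a fixed-size bag-of-features vector.
    vec = np.zeros(64)
    for a, b in zip(text, text[1:]):
        vec[(ord(a) * 31 + ord(b)) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def top_k(query, docs, k=2):
    # Cosine similarity between the query vector and each document
    # vector; the best-scoring documents become model context.
    doc_matrix = np.stack([embed(d) for d in docs])
    scores = doc_matrix @ embed(query)
    best = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in best]

docs = [
    "Wenda supports offline vector stores for private knowledge.",
    "LoRA adapters can be switched online via auto scripts.",
    "Dialog history is kept per user for multi-user deployments.",
]
context = top_k("How do I keep my knowledge base private and offline?", docs)
prompt = "Answer using this context:\n" + "\n".join(context)
```

In a real deployment the retrieved passages would come from an offline index (so no data leaves the intranet), and `prompt` would be sent to whichever of the supported models is configured.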
Visit Official Website