Language serves as an interface for LLMs to connect numerous AI models for solving complicated AI tasks!
We introduce a collaborative system that consists of an LLM as the controller and numerous expert models from the HuggingFace Hub as collaborative executors. The workflow of our system comprises four stages:
```
Task Planning: Using ChatGPT to analyze user requests, understand their intentions, and decompose them into solvable sub-tasks.
Model Selection: Based on the sub-tasks, ChatGPT selects and invokes the corresponding expert models hosted on HuggingFace.
Task Execution: Executing each selected model and returning the results to ChatGPT.
Response Generation: Finally, using ChatGPT to integrate the predictions of all models and generate a response for the user.
```
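To make the four stages concrete, below is a minimal sketch of how such a controller loop could be wired together. It is illustrative only: `query_llm` (a call to ChatGPT) and `run_hf_model` (a call to a model on the HuggingFace Hub) are hypothetical placeholders, and the prompts and JSON conventions are assumptions rather than the project's actual implementation.

```python
# Illustrative sketch of the four-stage workflow; `query_llm` and
# `run_hf_model` are hypothetical stand-ins for ChatGPT and HuggingFace calls.
import json
from typing import Callable


def plan_tasks(query_llm: Callable[[str], str], user_request: str) -> list[dict]:
    """Stage 1 - Task Planning: ask the LLM to decompose the request into sub-tasks."""
    prompt = (
        "Decompose the following user request into a JSON list of sub-tasks, "
        "each with a 'task' type and 'args':\n" + user_request
    )
    return json.loads(query_llm(prompt))


def select_model(query_llm: Callable[[str], str], sub_task: dict) -> str:
    """Stage 2 - Model Selection: let the LLM pick a suitable HuggingFace model id."""
    prompt = (
        "Choose the best HuggingFace model id for this sub-task and return only the id:\n"
        + json.dumps(sub_task)
    )
    return query_llm(prompt).strip()


def execute_task(run_hf_model: Callable[[str, dict], dict], model_id: str, sub_task: dict) -> dict:
    """Stage 3 - Task Execution: run the selected model and collect its prediction."""
    return run_hf_model(model_id, sub_task["args"])


def generate_response(
    query_llm: Callable[[str], str], user_request: str, results: list[dict]
) -> str:
    """Stage 4 - Response Generation: have the LLM integrate all predictions into one answer."""
    prompt = (
        "Given the user request and the model predictions below, write the final answer.\n"
        f"Request: {user_request}\nPredictions: {json.dumps(results)}"
    )
    return query_llm(prompt)


def handle_request(user_request, query_llm, run_hf_model):
    """Run the full pipeline: plan, select, execute, then respond."""
    sub_tasks = plan_tasks(query_llm, user_request)
    results = []
    for sub_task in sub_tasks:
        model_id = select_model(query_llm, sub_task)
        results.append(execute_task(run_hf_model, model_id, sub_task))
    return generate_response(query_llm, user_request, results)
```

In practice, the controller LLM sees the user request at both ends of the pipeline: once to plan and select, and once more to turn the raw model outputs into a coherent reply.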