About XrayGLM

Recently, large language models (LLMs) in general domains, such as ChatGPT, have achieved remarkable success in following instructions and generating human-like responses. This success has indirectly driven the research and development of multimodal large models. However, such multimodal large models rarely appear in medical research, which hinders the development of related work. Although Visual Med-Alpaca has made effective progress on large medical multimodal models, its data consist of English diagnostic reports, which does not help advance research on medical multimodal large models for Chinese. To address these problems, we developed XrayGLM. XrayGLM has shown extraordinary potential in medical imaging diagnosis and multi-round interactive dialogue.
