About OpenMMLab

OpenMMLab builds one of the most influential open-source computer vision algorithm systems of the deep learning era. It aims to:

- provide high-quality libraries to reduce the difficulty of algorithm implementation
- create efficient deployment toolchains targeting a variety of backends and devices
- build a solid foundation for computer vision research and development
- bridge the gap between academic research and industrial applications with full-stack toolchains

Visit the official website: https://openmmlab.com/

Community Posts
OpenMMLab
🥳#OpenCompass is on the verge of a significant version update, and we need your insights.
😘Participate in our survey and help us tailor the experience to your needs. Your opinion is invaluable to us.
🥰Survey Link:
t.co/Xhk6aSXfTY
🤗GitHub:
GitHub - open-compass/opencompass: OpenCompass is an LLM evaluation platform, supporting a wide range of models (LLaMA, LLaMA 2, ChatGLM2, ChatGPT, Claude, etc.) over 50+ datasets.
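For readers unfamiliar with what an "LLM evaluation platform" does, here is a hedged, self-contained sketch of the core loop such tools automate: run a model over a dataset and score its answers. The model stub and dataset below are invented for illustration; OpenCompass's real workflow is config-driven and far more capable.

```python
# Illustrative sketch of the model-over-dataset evaluation loop that
# platforms like OpenCompass automate at scale. Everything here is a
# stand-in: the real OpenCompass API is config-driven, not a function call.

def evaluate(model, dataset):
    """Return accuracy of `model` (a prompt -> answer callable) on `dataset`."""
    correct = sum(1 for prompt, gold in dataset if model(prompt) == gold)
    return correct / len(dataset)

# Toy "model" and "dataset" for demonstration only.
toy_dataset = [("2+2=", "4"), ("capital of France?", "Paris"), ("3*3=", "9")]
toy_model = lambda prompt: {"2+2=": "4", "capital of France?": "Paris"}.get(prompt, "?")

print(evaluate(toy_model, toy_dataset))  # 2 of 3 correct
```

A real platform additionally handles prompt templating, batching, many metrics, and 50+ datasets, which is exactly the bookkeeping that makes a dedicated toolkit worthwhile.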
OpenMMLab
🤗👍
-------------
From @RidgeRun.ai: At RidgeRun we love the @OpenMMLab projects! Their rich infrastructure allows us to test SOTA models quickly and easily.

Here’s a step-by-step guide on how to enable training-plot visualization in #MMAction2, a framework for video understanding.
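As a rough sketch of what such a guide typically involves (assumptions mine, not RidgeRun's actual steps): OpenMMLab configs expose a `visualizer` field whose `vis_backends` list controls where training scalars are logged, and adding MMEngine's `TensorboardVisBackend` is the usual way to get live training plots.

```python
# Hedged sketch: enabling TensorBoard training plots in an MMAction2
# config. The backend names follow MMEngine's conventions; treat this
# as illustrative, not a reproduction of RidgeRun's guide.
vis_backends = [
    dict(type='LocalVisBackend'),        # write scalars/logs to disk
    dict(type='TensorboardVisBackend'),  # also log to TensorBoard
]
visualizer = dict(
    type='ActionVisualizer',             # MMAction2's visualizer class
    vis_backends=vis_backends,
)
```

With something like this in the config, loss and accuracy curves become viewable in TensorBoard (the exact log directory depends on your `work_dir` setting).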
OpenMMLab
🥳The recording of our @ICCVConference tutorial on Introduction to #OpenDataLab: An Open Data Platform for Artificial Intelligence is now available! #ICCV2023
👉Introduction to...
OpenMMLab
👍
-------------
From @OpenAI: We're rolling out new features and improvements that developers have been asking for:

1. Our new model GPT-4 Turbo supports 128K context and has fresher knowledge than GPT-4. Its input and output tokens are 3× and 2× cheaper than GPT-4's, respectively. It’s available now to…
OpenMMLab
🥰The recording of our @ICCVConference tutorial on Learning to Generate, Edit, and Enhance Images and Videos with #MMagic is now available! #ICCV2023
👉Learning to Gen...
OpenMMLab
🥳The recording of our @ICCVConference tutorial on #MMDetection: from General Object Detection to Multi-modal Agent is now available! #ICCV2023
🤗MMDetection: fr...
OpenMMLab
🥳We are delighted to announce that our evaluation toolkit, OpenCompass, has been recommended by Meta (t.co/uGVC2I0gOr). @AIatMeta
☺️Welcome to try OpenCompass for your research and production of LLM.
👉GitHub - open-c...
#LLM #LLaMA #Meta #OpenAI #OpenCompass
OpenMMLab
🥳The recording of our @ICCVConference tutorial on Learning Fundamental Models with #OpenMMLab is now available! #ICCV2023
👉Learning Fundam...
OpenMMLab
🔥#BotChat: Evaluating #LLMs' Capabilities of Having Multi-Turn Dialogues
🤖BotChat is a benchmark for evaluating the multi-turn chatting capabilities of large language models.
- Project: t.co/gzlspZ8zl3
- Paper: t.co/CHouu194YJ
- Code: t.co/hk9OpNX5PA
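The core idea BotChat probes can be sketched as two chatbots talking to each other and then judging the resulting transcript. The stubs below are purely illustrative; BotChat's real protocol seeds conversations with human utterances and uses an LLM judge on the continuations.

```python
# Illustrative two-bot self-chat loop of the kind BotChat evaluates.
# The "bots" are canned stubs; a real run would call two LLMs.

def self_chat(bot_a, bot_b, opening, turns):
    """Alternate bot_a/bot_b for `turns` utterances, starting from `opening`."""
    transcript = [opening]
    speakers = [bot_a, bot_b]
    for i in range(turns):
        reply = speakers[i % 2](transcript)  # each bot sees the full history
        transcript.append(reply)
    return transcript

# Stub bots that just echo the last utterance.
bot_a = lambda history: f"A replies to: {history[-1]}"
bot_b = lambda history: f"B replies to: {history[-1]}"

chat = self_chat(bot_a, bot_b, "Hello!", turns=4)
# transcript grows by one utterance per turn: opening + 4 replies
```

Evaluating whether such transcripts stay coherent, on-topic, and human-like over many turns is exactly what a multi-turn benchmark has to automate.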
OpenMMLab
☺️The recording of our @ICCVConference tutorial on OpenMMLab: Open-source Platform for Vision, Language and Generative AI is now available! #ICCV2023
👉OpenMMLab: Open...
OpenMMLab
🥳 Introducing Seal 🦭, a novel framework that leverages vision foundation models for consistent spatial and temporal self-supervised learning on large-scale point clouds. @ldkong1205
🤗 Venue: #NeurIPS2023 ✨Spotlight✨ @NeurIPSConf
🥰 Paper: t.co/7ECHRxqSxg
OpenMMLab
🥳#MMagic Release v1.1.0!
- Support #ViCo, a new Stable Diffusion personalization method.
- Support #AnimateDiff, a popular text-to-animation method.
- Support #SDXL.
- Support a #DragGAN implementation with MMagic.
- Support #FastComposer.
👉GitHub - open-m...
OpenMMLab
#MMPreTrain Release v1.1.0!
🤗Add a MiniGPT-4 Gradio demo and training script.
🤗Support self-supervised algorithm #DINO.
🤗Support #CLIP zero-shot classification.
👉Welcome to try at GitHub - open-m...
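Zero-shot classification with CLIP works by embedding the image and each candidate class prompt into a shared space and picking the closest text. Here is a tiny sketch of that mechanism with made-up embedding vectors; it is not MMPreTrain's actual API.

```python
# Mechanism behind CLIP zero-shot classification, with toy vectors in
# place of real image/text encoder outputs. Not MMPreTrain's API.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def zero_shot_classify(image_embedding, text_embeddings):
    """Pick the prompt whose embedding is most similar to the image's."""
    return max(text_embeddings,
               key=lambda label: cosine(image_embedding, text_embeddings[label]))

# Toy embeddings standing in for encoder outputs.
image_emb = [0.9, 0.1, 0.0]
prompts = {
    "a photo of a cat": [1.0, 0.0, 0.0],
    "a photo of a dog": [0.0, 1.0, 0.0],
}
print(zero_shot_classify(image_emb, prompts))  # "a photo of a cat"
```

Because the class set is just a list of text prompts, no retraining is needed to classify against new categories, which is what makes the zero-shot setting attractive.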
OpenMMLab
🥳#MMEngine Release v0.9.0!
🤗Support training large models with #ColossalAI @HPCAITech to maximize training speed and minimize GPU memory usage.
🤗Support multiple visualization backends, including NeptuneVisBackend, DVCLiveVisBackend and AimVisBackend.
👉GitHub - open-mmlab/mmengine: OpenMMLab Foundational Library for Training Deep Learning Models
OpenMMLab
🥳#InternLM-#XComposer: A Vision-Language Large Model for Advanced Text-image Comprehension and Composition.
🥰Compose coherent and contextual articles that seamlessly integrate images.
🤗Demo @huggingface @Gradio: InternLM XCompo...
🤗GitHub: GitHub - Intern...
OpenMMLab
🤗
-------------
From @DAIR.AI: Top ML Papers of the Week (Oct 2 - Oct 8):

- StreamingLLM
- Analogical Prompting
- The Dawn of LMMs
- Neural Developmental Programs
- LLMs Represent Space and Time
- Retrieval meets Long Context LLMs
...

----

1/ LLMs Represent Space and Time - discovers that LLMs learn linear…
OpenMMLab
🥳We will have Bin Wang give an overall introduction to OpenDataLab, covering the open dataset platform, open-source data processing toolkits and a dataset description language at #ICCV2023.
🤗Time: Oct 2nd, 11:15-11:45 AM | Room P02 @ICCVConference
OpenMMLab
🥳At #ICCV2023, @zengyh1900 will introduce the state-of-the-art open-source toolbox #MMagic.
😊Don't miss out if you’re into advanced image and video generation, editing, and enhancement.
🤗Time: Oct 2nd, 10:45-11:15 AM | Room P02 @ICCVConference
OpenMMLab
🥳We're excited to have @wenweiz97 introduce detection toolkits and multi-modal agents at #ICCV2023.
😉 Learn how to efficiently tackle object detection projects with computer vision toolkits and agents.
🤗Time: Oct 2nd, 10:00-10:30 AM | Room P02 @ICCVConference