Nvidia releases CALM AI model: training virtual characters that can simulate 5 billion human actions
According to news on August 11, Nvidia, in collaboration with the Technion (Israel Institute of Technology), Bar-Ilan University and Simon Fraser University, recently published a technical paper on the CALM AI model.
Nvidia said that CALM stands for Conditional Adversarial Latent Models, a method for training custom virtual characters.
Nvidia says that 10 days of wall-clock training corresponds to roughly 10 years of experience in the simulated world.
After training, the CALM AI model can simulate 5 billion human motions, covering walking, standing, sitting, running, sword fighting and other actions.
Nvidia says CALM can capture the complexity and variety of human motion and enable direct control of a character's motion.
Results show that CALM learns semantic motion representations that allow higher-level task training to control both the generated motion and its style. After further training, users can control characters through an interface similar to those found in video games.
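The core idea described above can be sketched in a few lines: a motion encoder maps reference clips to compact latent codes, and a low-level policy produces joint actions conditioned on both the character state and the chosen latent, so selecting a latent "directs" the character's style. The sketch below is purely illustrative; all names, shapes and the random projections are assumptions, not NVIDIA's implementation.

```python
# Hypothetical sketch of latent-conditioned character control (not NVIDIA's code).
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 64

def encode_motion(clip: np.ndarray) -> np.ndarray:
    """Toy stand-in for the motion encoder: project a clip of poses to a
    unit-norm latent code representing the motion's style."""
    projection = rng.standard_normal((clip.shape[1], LATENT_DIM))
    z = clip.mean(axis=0) @ projection
    return z / np.linalg.norm(z)

def low_level_policy(state: np.ndarray, z: np.ndarray) -> np.ndarray:
    """Toy stand-in for the latent-conditioned policy: the action depends on
    both the current character state and the latent describing the style."""
    w = rng.standard_normal((state.size + z.size, 28))  # 28 actuated joints, assumed
    return np.tanh(np.concatenate([state, z]) @ w)

# "Directable" control: encode a reference clip (e.g. a sword swing) once,
# then feed its latent to the policy at every control step.
reference_clip = rng.standard_normal((30, 69))  # 30 frames of 69-D poses, assumed
z_sword = encode_motion(reference_clip)
state = rng.standard_normal(69)
action = low_level_policy(state, z_sword)
print(action.shape)
```

Under this reading, a game-like interface simply maps buttons or menu entries to stored latents, swapping which `z` is fed to the policy.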