
ImageBind: One Embedding Space To Bind Them All

FAIR, Meta AI

Rohit Girdhar*, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra*

To appear at CVPR 2023 (Highlighted paper)

[Paper] [Blog] [Demo] [Supplementary Video] [BibTex]

PyTorch implementation and pretrained models for ImageBind. For details, see the paper: ImageBind: One Embedding Space To Bind Them All.

ImageBind learns a joint embedding across six modalities: images, text, audio, depth, thermal, and IMU data. It enables novel emergent applications 'out-of-the-box', including cross-modal retrieval, composing modalities with arithmetic, cross-modal detection, and generation.
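As a toy illustration of what "composing modalities with arithmetic" means in this shared space, the sketch below sums two embeddings and re-normalizes the result to form a query, then ranks a candidate set by cosine similarity. The vectors here are random placeholders standing in for real ImageBind outputs (which would be produced as in the Usage section below).

```
import torch
import torch.nn.functional as F

# Stand-in embeddings; in practice these come from ImageBind and all
# live in one shared embedding space (1024-d assumed for illustration).
emb_dim = 1024
image_emb = F.normalize(torch.randn(emb_dim), dim=-1)      # e.g. an image embedding
audio_emb = F.normalize(torch.randn(emb_dim), dim=-1)      # e.g. an audio embedding
candidates = F.normalize(torch.randn(5, emb_dim), dim=-1)  # gallery to search over

# Compose modalities with arithmetic: sum the embeddings, re-normalize.
query = F.normalize(image_emb + audio_emb, dim=-1)

# Cross-modal retrieval: rank candidates by cosine similarity to the query.
scores = candidates @ query  # shape (5,)
best = scores.argmax().item()
print(f"best match: candidate {best} (cosine similarity {scores[best]:.3f})")
```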

[Figure: ImageBind model]

Emergent zero-shot classification performance on ImageNet-1K (IN1k), Kinetics-400 (K400), NYU-Depth (NYU-D), ESC-50 (ESC), LLVIP, and Ego4D.

| Model          | IN1k | K400 | NYU-D | ESC  | LLVIP | Ego4D | download   |
| -------------- | ---- | ---- | ----- | ---- | ----- | ----- | ---------- |
| imagebind_huge | 77.7 | 50.0 | 54.0  | 66.9 | 63.4  | 25.0  | checkpoint |
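As a rough sketch of how such emergent zero-shot numbers are obtained: class names are embedded as text prompts, and each test input is assigned the class whose text embedding it matches best. The snippet below uses the repo's loaders (shown in the Usage section that follows); the prompt template, class list, and image path are illustrative, not the exact evaluation protocol.

```
import torch

import data
from models import imagebind_model
from models.imagebind_model import ModalityType

device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = imagebind_model.imagebind_huge(pretrained=True).eval().to(device)

# Illustrative class prompts and a single test image (placeholder path).
class_names = ["dog", "car", "bird"]
prompts = [f"A photo of a {c}." for c in class_names]

inputs = {
    ModalityType.TEXT: data.load_and_transform_text(prompts, device),
    ModalityType.VISION: data.load_and_transform_vision_data([".assets/dog_image.jpg"], device),
}
with torch.no_grad():
    emb = model(inputs)

# Zero-shot prediction: the class whose text embedding best matches the image.
sims = emb[ModalityType.VISION] @ emb[ModalityType.TEXT].T  # (1, num_classes)
print("predicted class:", class_names[sims.argmax(dim=-1).item()])
```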

Usage

Install PyTorch 1.13+ and other third-party dependencies.

```
conda create --name imagebind python=3.8 -y
conda activate imagebind

pip install -r requirements.txt
```
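To confirm the environment is set up as expected, an optional quick check:

```
import torch

print(torch.__version__)          # should report 1.13 or newer
print(torch.cuda.is_available())  # True if a CUDA GPU will be used
```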

Extract and compare features across modalities (e.g., image, text, and audio).

```
import data
import torch
from models import imagebind_model
from models.imagebind_model import ModalityType

text_list = ["A dog.", "A car", "A bird"]
image_paths = [".assets/dog_image.jpg", ".assets/car_image.jpg", ".assets/bird_image.jpg"]
audio_paths = [".assets/dog_audio.wav", ".assets/car_audio.wav", ".assets/bird_audio.wav"]

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Instantiate model
model = imagebind_model.imagebind_huge(pretrained=True)
model.eval()
model.to(device)

# Load data
inputs = {
    ModalityType.TEXT: data.load_and_transform_text(text_list, device),
    ModalityType.VISION: data.load_and_transform_vision_data(image_paths, device),
    ModalityType.AUDIO: data.load_and_transform_audio_data(audio_paths, device),
}

with torch.no_grad():
    embeddings = model(inputs)

print(
    "Vision x Text: ",
    torch.softmax(embeddings[ModalityType.VISION] @ embeddings[ModalityType.TEXT].T, dim=-1),
)
print(
    "Audio x Text: ",
    torch.softmax(embeddings[ModalityType.AUDIO] @ embeddings[ModalityType.TEXT].T, dim=-1),
)
print(
    "Vision x Audio: ",
    torch.softmax(embeddings[ModalityType.VISION] @ embeddings[ModalityType.AUDIO].T, dim=-1),
)

# Expected output:
#
# Vision x Text:
# tensor([[9.9761e-01, 2.3694e-03, 1.8612e-05],
#         [3.3836e-05, 9.9994e-01, 2.4118e-05],
#         [4.7997e-05, 1.3496e-02, 9.8646e-01]])
#
# Audio x Text:
# tensor([[1., 0., 0.],
#         [0., 1., 0.],
#         [0., 0., 1.]])
#
# Vision x Audio:
# tensor([[0.8070, 0.1088, 0.0842],
#         [0.1036, 0.7884, 0.1079],
#         [0.0018, 0.0022, 0.9960]])
```
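Each softmax row above is a distribution over candidates, so top-1 cross-modal retrieval reduces to an argmax over it. A short follow-on sketch, assuming the `embeddings`, `image_paths`, and `audio_paths` variables from the example above are still in scope:

```
# Retrieve the best-matching audio clip for each image, reusing the
# `embeddings` dict computed in the example above.
sim = embeddings[ModalityType.VISION] @ embeddings[ModalityType.AUDIO].T

for i, j in enumerate(sim.argmax(dim=-1).tolist()):
    print(f"image {image_paths[i]} -> audio {audio_paths[j]}")
```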

Model card

Please see the model card for details.

License

ImageBind code and model weights are released under the CC-BY-NC 4.0 license. See LICENSE for additional details.

Contributing

See contributing and the code of conduct.

Citing ImageBind

If you find this repository useful, please consider giving a star ⭐ and citing our paper.
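A BibTeX entry consistent with the title, author list, and venue above (the official entry behind the [BibTex] link may differ in key and fields):

```
@inproceedings{girdhar2023imagebind,
  title     = {ImageBind: One Embedding Space To Bind Them All},
  author    = {Girdhar, Rohit and El-Nouby, Alaaeldin and Liu, Zhuang and Singh, Mannat and Alwala, Kalyan Vasudev and Joulin, Armand and Misra, Ishan},
  booktitle = {CVPR},
  year      = {2023}
}
```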

Official website: https://imagebind.metademolab.com/
