Integrate Dynamic Doodles into Videos with VideoDoodles Technology

Hayo News
July 21st, 2023

VideoDoodles is an editing technology developed by researchers from Adobe and other institutions that allows dynamic doodles to be integrated into videos.

This technology enables adding dynamic doodles to objects within the video. The doodles automatically adjust as the objects move and are displayed with appropriate perspective and occlusion effects (when objects are obscured by others).

As a result, you can create unique and impressive videos without professional skills or a large time investment.

The added doodles will adjust accordingly based on the changes in objects and the environment in the video.

For instance, if objects in the video move further away, the doodles will also proportionally shrink to simulate a perspective effect. Similarly, if the doodles are occluded by other objects in the video, they will be hidden to simulate an occlusion effect.
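Both behaviors can be sketched numerically. Assuming a simple pinhole camera with focal length `f`, a doodle's on-screen size falls off inversely with its depth, and a depth test against the scene decides visibility. The function names below are illustrative, not part of VideoDoodles:

```python
def doodle_screen_scale(focal_length, depth):
    """Apparent on-screen scale falls off inversely with distance to the camera."""
    return focal_length / depth

def is_occluded(doodle_depth, scene_depth_at_pixel, eps=1e-3):
    """Hide the doodle when scene geometry at that pixel is closer to the camera."""
    return scene_depth_at_pixel + eps < doodle_depth

# Moving the anchor twice as far away halves its on-screen size.
print(doodle_screen_scale(50.0, 2.0))  # 25.0
print(doodle_screen_scale(50.0, 4.0))  # 12.5

# A scene surface at depth 2 in front of a doodle at depth 4 hides it.
print(is_occluded(4.0, 2.0))  # True
```

This is only the geometric intuition; the actual system reconstructs scene depth from the video to drive these tests per pixel.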

This technology allows the doodles to seamlessly blend into the video, as if they are an integral part of it.

VideoDoodles is primarily based on two core technologies: Plane Canvas and Custom Tracking Algorithm.

Plane Canvas: Users can position plane canvases within the 3D scene reconstructed from the video. These canvases can be seen as virtual "papers" where users can draw. Since the canvases are located in 3D space, they can undergo perspective deformation based on the camera's viewpoint and position.
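The perspective deformation of such a planar canvas follows from standard pinhole projection: the canvas corners live in 3D, so projecting them through the camera pose yields the foreshortened quad seen in the frame. A minimal sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def project_canvas(corners_world, R, t, focal, pp):
    """Project 3D canvas corners into the image with a pinhole camera.

    corners_world: (N, 3) corner positions in world coordinates
    R, t:          world-to-camera rotation (3x3) and translation (3,)
    focal, pp:     focal length in pixels and principal point (2,)
    """
    cam = corners_world @ R.T + t          # world -> camera coordinates
    uv = focal * cam[:, :2] / cam[:, 2:3]  # perspective divide by depth
    return uv + pp                         # shift into pixel coordinates

# A unit-square canvas facing the camera at depth 5.
corners = np.array([[-0.5, -0.5, 5.0], [0.5, -0.5, 5.0],
                    [0.5,  0.5, 5.0], [-0.5,  0.5, 5.0]])
uv = project_canvas(corners, np.eye(3), np.zeros(3),
                    focal=500.0, pp=np.array([320.0, 240.0]))
print(uv)  # a 100x100-pixel square centered on the principal point
```

Tilting the canvas or moving the camera changes `R` and `t`, and the projected quad deforms in perspective accordingly, which is what makes the drawing look attached to the scene.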

Custom Tracking Algorithm: This algorithm enables users to anchor the canvas to static or dynamic objects within the scene. This means that if the anchored object moves or rotates, the canvas will also move and rotate accordingly, allowing the drawings to follow the movement of objects in the video.
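The anchoring step can be sketched as carrying the canvas along with the tracked object's frame-to-frame rigid motion. This is only the pose-update half of the idea; the paper's actual contribution is estimating that motion from the video, which the sketch takes as given:

```python
import numpy as np

def update_canvas_pose(canvas_pose, obj_prev, obj_curr):
    """Apply the tracked object's frame-to-frame rigid motion to the canvas.

    All poses are 4x4 homogeneous transforms in world coordinates."""
    motion = obj_curr @ np.linalg.inv(obj_prev)
    return motion @ canvas_pose

# Example: the tracked object translates by +1 along x between frames,
# so a canvas anchored to it is carried along by the same motion.
obj_prev = np.eye(4)
obj_curr = np.eye(4); obj_curr[0, 3] = 1.0
canvas = np.eye(4);   canvas[2, 3] = 5.0
print(update_canvas_pose(canvas, obj_prev, obj_curr)[:3, 3])  # [1. 0. 5.]
```

Because rotations are handled by the same 4x4 composition, a canvas stuck to a turning object rotates with it, keeping the drawing glued to the surface.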

For more details: https://levtech.jp/media/article/column/detail_272/


Reprinted from @xiaohuggg

