We present LatentSync, an end-to-end lip sync framework based on audio-conditioned latent diffusion models, without any intermediate motion representation, diverging from previous diffusion-based lip sync methods that rely on pixel-space diffusion or two-stage generation. Our framework can leverage the powerful capabilities of Stable Diffusion to directly model complex audio-visual correlations.
| Original video | Lip-synced video |
| --- | --- |
| demo1_video.mp4 | demo1_output.mp4 |
| demo2_video.mp4 | demo2_output.mp4 |
| demo3_video.mp4 | demo3_output.mp4 |
| demo4_video.mp4 | demo4_output.mp4 |
| demo5_video.mp4 | demo5_output.mp4 |
(Photorealistic videos are filmed by contracted models, and anime videos are from VASA-1 and EMO)
- Inference code and checkpoints
- Data processing pipeline
- Training code
Install the required packages and download the checkpoints via:
```bash
source setup_env.sh
```
If the download is successful, the checkpoints should appear as follows:
```
./checkpoints/
|-- latentsync_unet.pt
|-- latentsync_syncnet.pt
|-- whisper
|   `-- tiny.pt
|-- auxiliary
|   |-- 2DFAN4-cd938726ad.zip
|   |-- i3d_torchscript.pt
|   |-- koniq_pretrained.pkl
|   |-- s3fd-619a316812.pth
|   |-- sfd_face.pth
|   |-- syncnet_v2.model
|   |-- vgg16-397923af.pth
|   `-- vit_g_hybrid_pt_1200e_ssv2_ft.pth
```
These include all the checkpoints required for LatentSync training and inference. If you just want to try inference, you only need to download `latentsync_unet.pt` and `tiny.pt` from our HuggingFace repo.
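To sanity-check an inference-only download, a small sketch like the following could verify the two required files are in place (the `missing_checkpoints` helper is our illustration, not part of the repo):

```python
from pathlib import Path

# Minimal set needed for inference; full training needs the rest of the tree above.
REQUIRED = [
    "checkpoints/latentsync_unet.pt",
    "checkpoints/whisper/tiny.pt",
]

def missing_checkpoints(root: str = ".") -> list:
    """Return the required checkpoint paths not present under `root`."""
    return [p for p in REQUIRED if not (Path(root) / p).exists()]
```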
Run the following script for inference, which requires about 6.5 GB of GPU memory:
```bash
./inference.sh
```
The complete data processing pipeline includes the following steps:
- Remove the broken video files.
- Resample the video FPS to 25, and resample the audio to 16000 Hz.
- Detect scene transitions via PySceneDetect.
- Split each video into 5-10 second segments.
- Remove videos where the face is smaller than 256 $\times$ 256, as well as videos with more than one face.
- Affine transform the faces according to the landmarks detected by face-alignment, then resize to 256 $\times$ 256.
- Remove videos with a sync confidence score lower than 3, and adjust the audio-visual offset to 0.
- Calculate the hyperIQA score, and remove videos with scores lower than 40.
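As an illustration of the resampling step above, one might build the ffmpeg command like this (a minimal sketch: the helper and file names are hypothetical, and the actual pipeline may use different ffmpeg options):

```python
def build_resample_cmd(src: str, dst: str, fps: int = 25, sample_rate: int = 16000) -> list:
    """Build an ffmpeg command that resamples video to `fps` FPS
    and audio to `sample_rate` Hz."""
    return [
        "ffmpeg", "-y",           # overwrite output without asking
        "-i", src,                # input video
        "-r", str(fps),           # target video frame rate
        "-ar", str(sample_rate),  # target audio sample rate
        dst,
    ]

cmd = build_resample_cmd("raw/clip001.mp4", "resampled/clip001.mp4")
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```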
Run the script to execute the data processing pipeline:
```bash
./data_processing_pipeline.sh
```
You can change the `input_dir` parameter in the script to specify the data directory to be processed. The processed data will be saved in the same directory. Each step generates a new directory, so if the process is interrupted by an unexpected error, you do not need to redo the entire pipeline.
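As a sketch of this resume behavior (the step names and the `run_pipeline` helper below are illustrative, not the pipeline's actual directory names):

```python
from pathlib import Path

STEPS = ["resampled", "segmented", "affine_transformed"]  # illustrative names

def run_pipeline(input_dir: str, steps=STEPS) -> list:
    """Run each step into its own sibling directory, skipping steps whose
    output directory already exists, so an interrupted run can resume."""
    executed = []
    prev = Path(input_dir)
    for step in steps:
        out = prev.parent / f"{prev.name}_{step}"
        if not out.exists():  # resume: completed steps are skipped
            out.mkdir(parents=True)
            # ... process videos from `prev` into `out` here ...
            executed.append(step)
        prev = out
    return executed
```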
Before training, you must process the data as described above and download all the checkpoints. We released a pretrained SyncNet with 94% accuracy on the VoxCeleb2 dataset to supervise U-Net training. Note that this SyncNet was trained on affine-transformed videos, so when using or evaluating it, you need to apply the affine transformation to the videos first (the affine transformation code is included in the data processing pipeline).
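SyncNet-style supervision scores lip sync by comparing audio and visual embeddings. As a rough illustration of the scoring idea only (the embedding networks are omitted, and `sync_confidence` / `best_offset` are our hypothetical helpers, not the released model's API), a minimal NumPy sketch:

```python
import numpy as np

def sync_confidence(audio_emb: np.ndarray, visual_emb: np.ndarray) -> float:
    """Cosine similarity between an audio and a visual embedding;
    higher means better lip sync (illustrative scoring only)."""
    a = audio_emb / (np.linalg.norm(audio_emb) + 1e-8)
    v = visual_emb / (np.linalg.norm(visual_emb) + 1e-8)
    return float(a @ v)

def best_offset(audio_embs: np.ndarray, visual_emb: np.ndarray) -> int:
    """Pick the audio frame whose embedding best matches the visual one --
    the idea behind adjusting the audio-visual offset to 0."""
    scores = [sync_confidence(a, visual_emb) for a in audio_embs]
    return int(np.argmax(scores))
```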
If all the preparations are complete, you can train the U-Net with the following script:
```bash
./train_unet.sh
```
You should change the parameters in the U-Net config file to specify the data directory, checkpoint save path, and other training hyperparameters.
If you want to train SyncNet on your own datasets, you can run the following script. The data processing pipeline for SyncNet is the same as for the U-Net.
```bash
./train_syncnet.sh
```
After every `validations_steps` training steps, the loss charts will be saved in `train_output_dir`. They contain both the training and validation loss.