Unsupervised Voice-Face Representation Learning by Cross-Modal Prototype Contrast

This is the PyTorch implementation for CMPC, as described in our paper:
Unsupervised Voice-Face Representation Learning by Cross-Modal Prototype Contrast
We also provide the pretrained model and testing resources.

Requirements:

Download Pre-trained Models

Data Pre-processing
To speed up training iterations, we extract log-mel features from the voice data in a pre-processing step.
>> cd experiments/cmpc
>> python data_transform.py --wav_dir {directory-of-the-wav-file} --logmel_dir {destination-path}
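For intuition, here is a minimal sketch, assuming librosa, of the kind of log-mel extraction data_transform.py performs. The sample rate, FFT size, hop length and mel-bin count are illustrative assumptions, not the repository's actual settings; use data_transform.py for real runs.

# A minimal sketch of log-mel feature extraction; parameters are assumptions.
import os
import numpy as np
import librosa

def wav_to_logmel(wav_path, logmel_dir, sr=16000, n_fft=512, hop_length=160, n_mels=64):
    y, _ = librosa.load(wav_path, sr=sr)  # load and resample the waveform
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length, n_mels=n_mels)
    logmel = librosa.power_to_db(mel)  # log-compress the mel energies
    out_path = os.path.join(logmel_dir, os.path.basename(wav_path).replace('.wav', '.npy'))
    np.save(out_path, logmel)  # cache as .npy so training skips feature extraction
    return out_path
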
Unsupervised Training
The configurations are written in the CONFIG.yaml file and can be changed to suit your needs,
such as the path information. Unsupervised training can then be started with:
>> python train.py CONFIG.yaml
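For intuition only, here is a minimal sketch of a cross-modal prototype-contrast-style objective: each voice embedding is pulled toward the face prototype (cluster centroid) of its paired identity. The prototype construction, temperature and loss details below are assumptions for illustration; the actual objective is the one defined in the paper and implemented by train.py.

# A sketch of a cross-modal prototype-contrast-style loss; details are assumptions.
import torch
import torch.nn.functional as F

def prototype_contrast_loss(voice_emb, face_protos, assignments, temperature=0.1):
    # voice_emb: (N, D) voice embeddings from the current batch
    # face_protos: (K, D) face-cluster prototypes (e.g. k-means centroids)
    # assignments: (N,) long tensor, index of each sample's face prototype
    v = F.normalize(voice_emb, dim=1)
    p = F.normalize(face_protos, dim=1)
    logits = v @ p.t() / temperature  # similarity of each voice to every prototype
    return F.cross_entropy(logits, assignments)  # pull each voice toward its face prototype
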
Evaluation on our trained model
Experiments cover three evaluation protocols: matching, verification and retrieval. The --ckp_path argument can point to either the downloaded model or your own trained model.
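As a rough illustration of the verification protocol, the sketch below scores voice-face trial pairs by cosine similarity and summarizes them with ROC AUC. The array layout and the choice of AUC as the metric are assumptions for illustration; the repository's evaluation scripts are the reference.

# A sketch of verification scoring; data layout and metric are assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

def verification_auc(voice_embs, face_embs, labels):
    # voice_embs, face_embs: (N, D) embeddings for N trial pairs
    # labels: (N,) with 1 when the voice and face share an identity, else 0
    v = voice_embs / np.linalg.norm(voice_embs, axis=1, keepdims=True)
    f = face_embs / np.linalg.norm(face_embs, axis=1, keepdims=True)
    scores = (v * f).sum(axis=1)  # cosine similarity per trial
    return roc_auc_score(labels, scores)
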
Testing data
The matching, verification and retrieval testing data is released in the ./data directory.

Citation

@inproceedings{zhu2022unsupervised,
  title={Unsupervised Voice-Face Representation Learning by Cross-Modal Prototype Contrast},
  author={Zhu, Boqing and Xu, Kele and Wang, Changjian and Qin, Zheng and Sun, Tao and Wang, Huaimin and Peng, Yuxing},
  booktitle={Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, {IJCAI-22}},
  pages={3787--3794},
  year={2022},
  month={7}
}