# Jittor Landscape Generation with TSIT
## Introduction

This repository provides the implementation of Team GAN! in the Jittor AI contest. We implemented our model based on the TSIT network architecture and achieved a score of 0.5189 in Track 1, ranking 15th on Board A.

Download our results.

See `assets/REPORT.pdf` for the assignment report.
## Install and Validate
### Environments

We trained and evaluated our model in the following environments. Total training time is estimated at 65 ~ 70 hours; inference takes a few minutes.

Training:

- Python 3.8.13
- Jittor 1.3.4.15
- CUDA 11.6
- Open MPI 4.0.3

Evaluation:

- Python 3.7.13
- Jittor 1.3.4.9
- CUDA 11.6
- No Open MPI
In these environments, the Jittor unit tests `test_conv_transpose3d` and `test_conv3d` may fail due to low precision.
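To sanity-check an installation against the versions above, something like the following may help (both commands come from the Jittor distribution itself, not from this repo's scripts):

```bash
# Print the installed Jittor version; compare against the lists above.
python -c "import jittor as jt; print(jt.__version__)"

# Run Jittor's bundled smoke test (trains a tiny model end to end).
python -m jittor.test.test_example
```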
### Packages
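A minimal sketch of a package setup consistent with the environments above; the `requirements.txt` path is an assumption, not confirmed by this repo:

```bash
# Pin Jittor to the training-environment version listed above.
python -m pip install jittor==1.3.4.15

# Hypothetical: install remaining dependencies, assuming a requirements.txt exists.
python -m pip install -r requirements.txt
```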
### Testing Pretrained Models
We trained two separate models and manually mixed their results to form our final submission. To reproduce our result, download our pretrained models and put them under `./checkpoints`.
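A rough sketch of the expected layout, assuming TSIT-style checkpoint names and the model name `main` used in the training section; the second model's directory is a placeholder:

```
checkpoints/
├── main/
│   ├── latest_net_G.pkl   # generator, used at evaluation
│   └── latest_net_D.pkl   # discriminator, not needed at evaluation
└── <second_model>/
    ├── latest_net_G.pkl
    └── latest_net_D.pkl
```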
Note that `*_net_D.pkl` (the discriminator) is not necessary at evaluation. Config the dataset path in `validation.sh`; it evaluates the models on the test dataset and calls `selection.py` to reproduce our manual selection process. Then run `bash validation.sh`. The result will be ready at `./result.zip`.
## Dataset Preprocessing

We made no modifications to the provided images before they are fed into our network, but we manually constructed three subsets of the training set:

- `Total`: the original 10,000 images.
- `Selection I`: manually removed some images from `Total`, leaving 8,115 images.
- `Selection II`: based on `Selection I`, with more images removed; contains 7,331 images.

Download our preprocessed training sets.
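Mechanically, reproducing such a selection amounts to deleting named files from a copy of `Total`; a minimal sketch, where `total/` and `removed.txt` (one filename per line) are hypothetical names for illustration:

```bash
# Build a selection by copying Total, then deleting the excluded files.
mkdir -p selection
cp -r total/. selection/
while read -r name; do
  rm -f "selection/$name"
done < removed.txt
```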
## Training Scripts
Train on a single GPU, or on multiple GPUs; typical invocations are sketched below.
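A sketch of both invocations, assuming a TSIT-style `train.py` entry point (the script name and `--dataroot` flag are our assumptions, not confirmed by this repo). Jittor runs multi-GPU training through Open MPI, which matches the Open MPI 4.0.3 entry in the training environment:

```bash
# Single GPU (hypothetical script name and flag):
python train.py --dataroot /path/to/train_set

# Multiple GPUs: Jittor parallelizes via Open MPI, e.g. 4 processes for 4 GPUs:
mpirun -np 4 python train.py --dataroot /path/to/train_set
```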
### About our training process

Training model `main` involves 4 phases, run on `Total`, `Selection I`, `Selection II`, and then `Selection I` again, in that order. This is not a carefully designed schedule; it is a compromise shaped by our remaining time, available compute, and our thinking at the time.
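Concretely, such a schedule can be driven as four successive runs that resume from the previous checkpoint. The sketch below is hypothetical: it reuses the assumed `train.py` from above, and `--continue_train` is borrowed from TSIT's PyTorch options rather than confirmed here:

```bash
# Hypothetical four-phase schedule for model `main`; only the
# dataset order (Total -> Sel. I -> Sel. II -> Sel. I) is from the text above.
python train.py --name main --dataroot /path/to/total
python train.py --name main --dataroot /path/to/selection1 --continue_train
python train.py --name main --dataroot /path/to/selection2 --continue_train
python train.py --name main --dataroot /path/to/selection1 --continue_train
```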
## Inference Scripts

Config and run `bash ./test.sh`; the results will be compressed into a 7zip file.
## Acknowledgement

The implementation of this repository is based on TSIT ([Code Base] [Paper]). In a sense, you may view it as an incomplete "style transfer" from the original PyTorch implementation to the Jittor framework.

Our spectral normalization uses the implementation of [PytorchAndJittor].

We implement our model with Jittor, a deep learning framework based on just-in-time compilation that internally uses innovative meta-operators and a unified computational graph. Meta-operators are as easy to use as NumPy, yet can express more complex and more efficient operations. The unified computational graph combines the advantages of static and dynamic graphs and provides high-performance optimization. Models developed with Jittor can be optimized automatically in real time and run on CPUs or GPUs.