MNIST Image Generation based on Jittor CGAN
This is a Jittor implementation of a conditional generative adversarial network (CGAN) that generates MNIST digits conditioned on their class labels. A CGAN consists of two neural networks: a generator and a discriminator. The generator tries to create realistic images that match the given labels, while the discriminator tries to distinguish real images from generated ones.
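The script itself defines the two networks; the sketch below only illustrates the usual conditioning trick, where a label embedding is concatenated with the noise vector (generator) or with the flattened image (discriminator). The layer sizes, names, and exact Jittor calls here are assumptions and may differ from what CGAN.py actually uses.

```python
import jittor as jt
from jittor import nn

class Generator(nn.Module):
    """Maps (noise, label) -> image by concatenating a label embedding with the noise vector."""
    def __init__(self, latent_dim=100, n_classes=10, img_size=32, channels=1):
        super().__init__()
        self.img_shape = (channels, img_size, img_size)
        self.label_emb = nn.Embedding(n_classes, n_classes)
        self.model = nn.Sequential(
            nn.Linear(latent_dim + n_classes, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, channels * img_size * img_size),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized MNIST images
        )

    def execute(self, noise, labels):
        x = jt.concat([self.label_emb(labels), noise], dim=1)
        img = self.model(x)
        return img.reshape((img.shape[0], *self.img_shape))

class Discriminator(nn.Module):
    """Scores (image, label) pairs; real pairs should score high, generated ones low."""
    def __init__(self, n_classes=10, img_size=32, channels=1):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, n_classes)
        self.model = nn.Sequential(
            nn.Linear(n_classes + channels * img_size * img_size, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, 1),
        )

    def execute(self, img, labels):
        x = jt.concat([img.reshape((img.shape[0], -1)), self.label_emb(labels)], dim=1)
        return self.model(x)
```

Defining `execute` rather than `forward` is the Jittor convention for a module's forward pass.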
Requirements
- Jittor: a high-performance deep learning framework based on JIT compiling and meta-operators.
- PIL (Pillow): the Python Imaging Library, used for image processing.
- NumPy: a Python library for scientific computing.
You can install the required packages using the following command:
pip install -r requirements.txt
Usage
You can run the script using the following command:
python CGAN.py
You can also specify the following optional arguments (a sketch of the corresponding argparse setup follows the example below):
- --n_epochs: number of epochs of training (default: 50)
- --batch_size: size of the batches (default: 64)
- --lr: learning rate for the Adam optimizer (default: 0.0002)
- --b1: beta1 parameter for the Adam optimizer (default: 0.5)
- --b2: beta2 parameter for the Adam optimizer (default: 0.999)
- --n_cpu: number of CPU threads to use during batch generation (default: 8)
- --latent_dim: dimensionality of the latent space (default: 100)
- --n_classes: number of classes in the dataset (default: 10)
- --img_size: size of each image dimension (default: 32)
- --channels: number of image channels (default: 1)
- --sample_interval: interval between image sampling (default: 1000)
- --train: whether to train the model (default: False)
- --number: the digit string to generate (default: "13620261360915")
For example, you can train the model for 100 epochs with a batch size of 128 using the following command:
python CGAN.py --n_epochs 100 --batch_size 128 --train
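The flags above correspond to a conventional argparse setup. The snippet below is only a sketch of how they might be declared, with help strings and defaults taken from the list above; the real declarations in CGAN.py may differ.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--n_epochs", type=int, default=50, help="number of epochs of training")
parser.add_argument("--batch_size", type=int, default=64, help="size of the batches")
parser.add_argument("--lr", type=float, default=0.0002, help="learning rate for the Adam optimizer")
parser.add_argument("--b1", type=float, default=0.5, help="beta1 parameter for the Adam optimizer")
parser.add_argument("--b2", type=float, default=0.999, help="beta2 parameter for the Adam optimizer")
parser.add_argument("--n_cpu", type=int, default=8, help="number of CPU threads for batch generation")
parser.add_argument("--latent_dim", type=int, default=100, help="dimensionality of the latent space")
parser.add_argument("--n_classes", type=int, default=10, help="number of classes in the dataset")
parser.add_argument("--img_size", type=int, default=32, help="size of each image dimension")
parser.add_argument("--channels", type=int, default=1, help="number of image channels")
parser.add_argument("--sample_interval", type=int, default=1000, help="interval between image sampling")
parser.add_argument("--train", action="store_true", help="train the model instead of only generating")
parser.add_argument("--number", type=str, default="13620261360915", help="digit string to generate")
opt = parser.parse_args()
```

Note that --train acts as a boolean switch: passing the flag enables training, omitting it leaves the default of False.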
Tests
You can test the model by generating some images conditioned on a given number sequence. The number sequence should be a string of digits from 0 to 9. For example, you can generate images conditioned on the number sequence "13620261360915" using the following command:
python CGAN.py --number "13620261360915"
The generated images will be saved as "result.png" in the same directory as the script.
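Conceptually, generation maps each character of the digit string to a class label, samples one latent vector per digit, runs the generator, and stitches the outputs into a single row image saved with PIL. The helper below is a rough sketch of that flow, assuming a trained generator like the one sketched earlier; the function name `generate_number_image` and the exact way CGAN.py lays out result.png are assumptions.

```python
import numpy as np
import jittor as jt
from PIL import Image

def generate_number_image(generator, number="13620261360915",
                          latent_dim=100, out_path="result.png"):
    """Generate one image per digit in `number` and save them side by side."""
    labels = jt.array(np.array([int(c) for c in number], dtype=np.int32))
    noise = jt.randn(len(number), latent_dim)
    with jt.no_grad():
        imgs = generator(noise, labels).numpy()      # (N, 1, H, W), values in [-1, 1]
    imgs = ((imgs.squeeze(1) + 1.0) / 2.0 * 255).astype(np.uint8)
    row = np.concatenate(list(imgs), axis=1)         # stitch digits horizontally
    Image.fromarray(row, mode="L").save(out_path)
```

With a trained generator loaded, calling `generate_number_image(generator, "13620261360915")` would write result.png to the current directory.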
References
This script is based on the following paper and code: