Diverse Image Generation via Self-Conditioned GANs

Steven Liu¹, Tongzhou Wang¹, David Bau¹, Jun-Yan Zhu², Antonio Torralba¹
¹MIT CSAIL, ²Adobe Research
CVPR 2020

Despite the remarkable progress of Generative Adversarial Networks (GANs), models trained without labels struggle to generalize to large-scale, diverse datasets such as ImageNet or Places365. To tackle such datasets, we typically rely on class-conditional GANs, which require class labels to train; these labels are often unavailable or expensive to obtain.

We propose to close this gap by inferring class labels in a fully unsupervised manner: we periodically cluster the features already computed by the discriminator and condition the GAN on the resulting cluster assignments. This improves generation quality on large-scale datasets such as ImageNet and Places365, and the inferred clusters turn out to be semantically meaningful.
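To make the mechanism concrete, below is a minimal sketch of the self-conditioning idea, assuming hypothetical G, D, and D.features interfaces rather than the released implementation: real images are periodically clustered in discriminator feature space, and the cluster assignments serve as pseudo-labels for an otherwise standard class-conditional GAN.

import torch
from sklearn.cluster import KMeans

NUM_CLUSTERS = 100        # number of inferred classes (assumed; dataset-dependent)
Z_DIM = 128               # latent dimensionality (assumed)

@torch.no_grad()
def infer_cluster_labels(D, real_images):
    """K-means over discriminator features; returns one pseudo-label per image."""
    feats = D.features(real_images).cpu().numpy()        # (N, feat_dim), hypothetical API
    kmeans = KMeans(n_clusters=NUM_CLUSTERS, n_init=10).fit(feats)
    return torch.as_tensor(kmeans.labels_, dtype=torch.long)

def gan_step(G, D, real_images, pseudo_labels, z_dim=Z_DIM):
    """One conditional GAN step that conditions on the inferred pseudo-labels."""
    z = torch.randn(real_images.size(0), z_dim, device=real_images.device)
    fake_images = G(z, pseudo_labels)
    d_real = D(real_images, pseudo_labels)
    d_fake = D(fake_images.detach(), pseudo_labels)
    # ... standard adversarial losses and optimizer updates for D and G ...
    return d_real, d_fake

In this sketch, infer_cluster_labels would be rerun every few thousand iterations so the pseudo-labels track the evolving discriminator features.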

Visualizing Inferred Clusters

Our method is able to automatically infer semantically meaningful clusters on complex datasets such as ImageNet and Places.

Below, we visualize some of the inferred clusters and the generated samples conditioned on each cluster. More examples of inferred clusters on ImageNet and on Places are available through the linked webpages (TODO).

Visualizing Sample Diversity

Sample diversity is often hard to judge by directly inspecting samples from GANs trained on already diverse datasets. Instead, we visualize diversity by showing, for each true class, the generated images that a pretrained classifier assigns to that class with the highest confidence.

Here is a visualization of diversity for the "pot pie" ImageNet category. Visualizations for other ImageNet classes are shown here (TODO).

Here is a visualization of diversity for the "embassy" Places365 category. Visualizations for other Places365 classes are shown here (TODO).

Image Reconstructions

By looking at what is missing from the reconstructions, we can visualize what the GAN is unable to generate. Below, we show image reconstructions for an unconditional GAN and for a self-conditioned GAN.
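Producing such reconstructions requires inverting the generator. A minimal sketch of one common approach is below: optimize a latent code (and, for the self-conditioned model, search over cluster labels) so that the generator output matches a target image, then inspect the residual. This is an illustrative procedure, not necessarily the exact inversion method used for the figures.

import torch
import torch.nn.functional as F

def reconstruct(G, target, num_labels=None, steps=500, z_dim=128, lr=0.05):
    """Return the best reconstruction of `target` found by optimizing a latent code."""
    best_recon, best_loss = None, float("inf")
    labels = range(num_labels) if num_labels is not None else [None]
    for label in labels:                           # optional search over cluster labels
        z = torch.randn(1, z_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            recon = G(z) if label is None else G(z, torch.tensor([label]))
            loss = F.mse_loss(recon, target)       # pixel loss; a perceptual loss is a common addition
            loss.backward()
            opt.step()
        if loss.item() < best_loss:
            best_recon, best_loss = recon.detach(), loss.item()
    return best_recon                              # target - best_recon shows what the GAN misses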

Why Do Class-Conditional GANs Work So Well?

We observe that class-conditional models exhibit strong unit sharing. Some units express different concepts under different class conditions, such as a person for one class and a tree for another. Other units are reused across classes, such as units for water and sky.

Below, we show examples of self-conditioned GAN units that correspond to different concepts under different conditions, as well as units that correspond to the same concept across conditions.
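As a hedged illustration of how such unit visualizations can be produced (in the spirit of GAN Dissection), the sketch below captures one unit's activation map with a forward hook, upsamples it to image resolution, and thresholds it to highlight where the unit fires. The layer and unit indices, and the conditional sampling interface, are placeholders.

import torch
import torch.nn.functional as F

@torch.no_grad()
def unit_activation_mask(G, layer, unit, z, label, quantile=0.99):
    """Generate an image and return a binary mask of where `unit` in `layer` fires."""
    acts = {}
    handle = layer.register_forward_hook(lambda module, inputs, output: acts.setdefault("a", output))
    image = G(z, label)                            # hypothetical conditional sampling interface
    handle.remove()
    fmap = acts["a"][:, unit:unit + 1]             # activation map of the chosen unit
    fmap = F.interpolate(fmap, size=image.shape[-2:], mode="bilinear", align_corners=False)
    mask = (fmap > torch.quantile(fmap, quantile)).float()
    return image, mask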

How to cite

Bibtex

@inproceedings{liu2020selfconditioned,
 title={Diverse Image Generation via Self-Conditioned GANs},
 author={Liu, Steven and Wang, Tongzhou and Bau, David and Zhu, Jun-Yan and Torralba, Antonio},
 booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
 year={2020}
}

Acknowledgements

We thank Phillip Isola, Bryan Russell, Richard Zhang, and our anonymous reviewers for their helpful comments. We are grateful for the support from the DARPA XAI program FA8750-18-C000, NSF 1524817, NSF BIGDATA 1447476, and a GPU donation from NVIDIA. This website template is borrowed from ganseeing.csail.mit.edu.