Generative image-to-image translation networks

MARCH 31, 2022

I've been learning about different generative neural networks and the publications behind them, trying to track down the work on which Informative Drawings was built. The papers I've been looking at include Pix2Pix, CycleGAN, Apple's S+U (Simulated + Unsupervised), NVIDIA's UNIT, and CUT (short for Contrastive Learning for Unpaired Image-to-Image Translation, along with its FastCUT and SinCUT variants), among others. I think there's potential to train these networks to translate line drawings to look like my hand sketches and vice versa.

  • Pix2Pix. UC Berkeley. 2017. Image-to-Image Translation with Conditional Adversarial Nets.
  • CycleGAN. UC Berkeley. 2017. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks.
  • S+U. Apple. 2017. Learning from Simulated and Unsupervised Images through Adversarial Training.
  • UNIT. NVIDIA. 2017. Unsupervised Image-to-Image Translation Networks.
  • StyleGAN2. NVIDIA. March 2020. Analyzing and Improving the Image Quality of StyleGAN.
  • CUT. UC Berkeley & Adobe Research. August 2020. Contrastive Learning for Unpaired Image-to-Image Translation.
  • Informative Drawings. March 2022. Learning to generate line drawings that convey geometry and semantics.
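
A recurring idea across the unpaired methods above (CycleGAN in particular) is cycle consistency: translating an image to the other domain and back should recover the original. As a rough sketch of that loss in plain Python (a toy, not real training code; the generators here are stand-in affine functions, whereas the actual models are deep networks trained jointly with adversarial losses):

```python
# Toy illustration of CycleGAN-style cycle-consistency loss.
# G maps domain X -> Y (e.g. photo -> sketch), F maps Y -> X.
# Stand-in affine generators so the loss is easy to follow by hand.

def G(x):  # stand-in generator X -> Y
    return [2.0 * v + 1.0 for v in x]

def F(y):  # stand-in generator Y -> X (inverse of G)
    return [(v - 1.0) / 2.0 for v in y]

def l1(a, b):
    # Mean absolute difference, the L1 norm used in the CycleGAN paper.
    return sum(abs(u - v) for u, v in zip(a, b)) / len(a)

def cycle_loss(x, y):
    # || F(G(x)) - x ||_1  +  || G(F(y)) - y ||_1
    return l1(F(G(x)), x) + l1(G(F(y)), y)

x = [0.1, 0.5, 0.9]   # a "photo" in domain X
y = [0.2, 0.4, 0.8]   # a "sketch" in domain Y
print(cycle_loss(x, y))  # ~0, since F here is G's exact inverse
```

During real training this term is minimized alongside the adversarial losses, which is what lets the networks learn a mapping without paired before/after images.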