Live 67: Generative Image-to-Image Translation Networks

APRIL 1, 2022

In yesterday's live stream—Live 67—I went over various generative image-to-image translation networks, examining their main characteristics to learn from them: how they work, what losses they use, and which prior work they're built upon. I used this list of networks, which includes, among others, Pix2Pix, CycleGAN, StyleGAN, CUT, and Informative Drawings.
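As an illustration of the kind of losses these networks use, here is a minimal sketch of a CycleGAN-style cycle-consistency loss in NumPy. The function names and toy generators are my own for this example; the idea from the CycleGAN paper is that translating an image to the other domain and back should recover the original, measured with an L1 penalty.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """CycleGAN-style cycle-consistency loss (L1):
    G maps domain X -> Y, F maps Y -> X.
    Translating x through G then F should recover x, and vice versa."""
    forward = np.mean(np.abs(F(G(x)) - x))   # x -> Y -> back to X
    backward = np.mean(np.abs(G(F(y)) - y))  # y -> X -> back to Y
    return forward + backward

# Toy "generators": identity mappings, so the cycle loss is exactly zero.
x = np.ones((4, 4))
y = np.zeros((4, 4))
identity = lambda a: a
print(cycle_consistency_loss(x, y, identity, identity))  # 0.0
```

In the full model this term is added to the adversarial losses of both generators; it is what lets CycleGAN train without paired images.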

I also showed OpenAI's CLIP, DALL·E, and Microscope, and NVIDIA's Imaginaire.

Browse through the timestamps below to jump into specific parts of the video.

You can spread the word by liking and sharing this tweet.

If this is something that interests you, please let me know on Twitter or, even better, in the Discord community.

Thanks for watching.

See you next week!