In yesterday's live stream (Live 64), we continued learning how to vectorize sketches with machine learning, using the Virtual Sketching framework by Mo et al.
There was little on-screen action, as we spent the time reading the paper and trying to understand how the model works. In short, the framework consists of four distinct steps: aligned cropping, stroke generation, differentiable rendering, and differentiable pasting, which I summarized from the Line Drawing Generation Framework section of the publication.1
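To make those four steps concrete, here is a toy sketch of the loop as I currently understand it. Everything in it is my own simplification for illustration: the `soft_line` rasterizer stands in for the paper's neural renderer, random offsets stand in for the model's stroke predictions, and the window and canvas sizes are arbitrary. None of it is code from the paper or the repository.

```python
import numpy as np

def soft_line(size, p0, p1, width=1.5):
    """Toy stand-in for differentiable rendering: intensity decays smoothly
    with distance to the segment p0 -> p1, so gradients could flow through."""
    ys, xs = np.mgrid[0:size, 0:size]
    pts = np.stack([xs, ys], axis=-1).astype(np.float32)
    seg = p1 - p0
    t = np.clip(((pts - p0) @ seg) / (seg @ seg + 1e-8), 0.0, 1.0)
    dist = np.linalg.norm(pts - (p0 + t[..., None] * seg), axis=-1)
    return np.exp(-(dist / width) ** 2)

def paste(canvas, patch, top_left):
    """Toy stand-in for differentiable pasting: blend a rendered window
    back onto the full canvas at the given position."""
    y, x = top_left
    s = patch.shape[0]
    canvas[y:y + s, x:x + s] = np.maximum(canvas[y:y + s, x:x + s], patch)
    return canvas

rng = np.random.default_rng(0)
canvas = np.zeros((256, 256), dtype=np.float32)
pen = np.array([128.0, 128.0])  # current pen position (x, y), assumed start

for _ in range(16):
    # Step 2 stand-in: a trained model would predict the next stroke
    # segment from the cropped window; here we draw a random offset.
    offset = rng.uniform(-24.0, 24.0, size=2)
    offset = np.clip(pen + offset, 48.0, 208.0) - pen  # keep the pen on canvas

    # Steps 1 + 3: render the segment inside a 64x64 window whose center
    # is aligned with the current pen position.
    window = soft_line(64, np.full(2, 32.0), np.full(2, 32.0) + offset)

    # Step 4: paste the rendered window back onto the global canvas so
    # the next iteration can see everything drawn so far.
    top_left = (pen - 32.0).astype(int)
    canvas = paste(canvas, window, (top_left[1], top_left[0]))

    pen = pen + offset  # the pen moves to the segment's endpoint
```

The point is the control flow: each iteration crops where the pen is, draws one small segment, and writes it back so the next crop reflects the updated canvas, which, in the real framework, is what lets gradients propagate end to end during training.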
I forked the source code at nonoesp/virtual_sketching and plan to make small modifications to improve certain utilities and understand the underlying logic.
My goal is to (a) obtain the coordinates of the vector output, render them myself, and build input/output workflows around that, (b) understand the model's architecture and how it learns during training, and (c) understand differentiable rendering and differentiable pasting.
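As a first stab at goal (a), here is roughly how I would render the strokes once the coordinates are extracted: a small matplotlib sketch that takes plain lists of (x, y) points and writes an SVG. The stroke data below is made up, and mapping the model's actual output files onto this structure is precisely the part I still need to dig out of the source code.

```python
import numpy as np
import matplotlib.pyplot as plt

def render_strokes(strokes, size=256, out_path="drawing.svg"):
    """Render vector strokes (each an N x 2 array of (x, y) points)
    and export them as SVG for downstream workflows."""
    fig, ax = plt.subplots(figsize=(4, 4))
    for pts in strokes:
        pts = np.asarray(pts, dtype=float)
        ax.plot(pts[:, 0], pts[:, 1], color="black", linewidth=1.2)
    ax.set_xlim(0, size)
    ax.set_ylim(size, 0)  # flip y so image coordinates read top-down
    ax.set_aspect("equal")
    ax.axis("off")
    fig.savefig(out_path, bbox_inches="tight")
    plt.close(fig)

# Made-up coordinates standing in for the model's vector output.
render_strokes([
    [(40, 60), (120, 80), (200, 70)],
    [(60, 180), (130, 150), (210, 190)],
])
```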
It would definitely be more productive for me to study some of these topics offline and share my findings live. But I haven't put in the time to do that over the last few weeks, so I may have to keep working on this only during the live streams.
If this is something that interests you, please let me know on Twitter or, even better, in the Discord community.
Thanks for watching.
See you next week!
1. Mo, Haoran, Edgar Simo-Serra, Chengying Gao, Changqing Zou, and Ruomei Wang. "General Virtual Sketching Framework for Vector Line Art." ACM Transactions on Graphics, 2021. ↩