Unsupervised learning of translucent material appearance using StyleGAN
Chenxi Liao, Bei Xiao, American University, United States; Masataka Sawayama, Inria, France
Posters 1 (Poster)
Pacific Ballroom H-O
Thu, 25 Aug, 19:30 - 21:30 Pacific Time (UTC -7)
Translucent materials show a wide variety of appearances, arising from complex interactions among generative physical factors (e.g. scattering, geometry, lighting). As a result, it has been difficult to discover generalizable image cues responsible for human perception of translucency across scenes. To address this challenge, we train an unsupervised learning model, StyleGAN2-ADA, on unlabeled photographs to generate perceptually realistic and diverse translucent appearances. By analyzing its layer-wise latent space (W+), we find that W+ disentangles the physical factors and that human observers agree with the semantics that emerge at different layers. More importantly, we find that its middle layers may encode informative image features that humans use to perceive translucency across contexts.
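The layer-wise analysis of W+ rests on style mixing: copying a subset of a source image's per-layer latent vectors into a target's W+ code and observing which attributes transfer. Below is a minimal NumPy sketch of that operation; the layer count, latent dimensionality, and choice of "middle" layer indices are illustrative assumptions, and in actual use the mixed code would be fed to a StyleGAN2 synthesis network to render the resulting appearance.

```python
import numpy as np

NUM_LAYERS = 14   # illustrative layer count for a StyleGAN2 generator
LATENT_DIM = 512  # dimensionality of each per-layer w vector

def mix_wplus(w_target, w_source, layers):
    """Return a copy of w_target with the given layers taken from w_source.

    Swapping only a subset of layers probes what those layers encode:
    if replacing the middle layers transfers the translucent look,
    those layers carry the relevant image features.
    """
    w_mixed = w_target.copy()
    w_mixed[layers] = w_source[layers]
    return w_mixed

rng = np.random.default_rng(0)
w_a = rng.standard_normal((NUM_LAYERS, LATENT_DIM))  # target W+ code
w_b = rng.standard_normal((NUM_LAYERS, LATENT_DIM))  # source W+ code

middle_layers = [4, 5, 6, 7]  # hypothetical "middle" layer indices
w_mixed = mix_wplus(w_a, w_b, middle_layers)
```

A real pipeline would obtain `w_a` and `w_b` by inverting photographs into W+ (or sampling and mapping `z` vectors) rather than drawing them at random.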