SIGGRAPH Asia, ICCV and Pacific Graphics Papers Accepted
Dr Yong-Liang Yang, Senior Lecturer and CAMERA Co-Investigator, has had three papers accepted at SIGGRAPH Asia 2019, ICCV 2019, and Pacific Graphics 2019.
Dr Yang’s research interests include Computer Graphics, Geometric Modelling, Computational Design, Interactive Techniques, Virtual and Augmented Reality, and Applied Machine Learning. To find out more about his recent publications, please follow the links below for papers and project pages.
Adversarial Monte Carlo Denoising with Conditioned Auxiliary Feature Modulation
Bing Xu, Junfei Zhang, Rui Wang, Kun Xu, Yong-Liang Yang, Chuan Li, Rui Tang
ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2019)
[paper] [project page]
Abstract
Denoising Monte Carlo rendering with a very low sample rate remains a major challenge in photo-realistic rendering research. Many previous works, including regression-based and learning-based methods, have explored how to achieve better rendering quality at lower computational cost. However, most of these methods rely on handcrafted optimization objectives, which lead to artifacts such as blurs and unfaithful details. In this paper, we present an adversarial approach for denoising Monte Carlo rendering. Our key insight is that generative adversarial networks can help denoiser networks produce more realistic high-frequency details and global illumination by learning the distribution from a set of high-quality Monte Carlo path tracing images. We also adapt a novel feature modulation method to better utilize auxiliary features, including normal, albedo, and depth. Compared to previous state-of-the-art methods, our approach produces a better reconstruction of the Monte Carlo integral from a few samples, performs more robustly at different sample rates, and takes only a second for megapixel images.
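As a rough illustration of the conditioned feature modulation the abstract mentions, the sketch below applies a FiLM-style scale-and-shift, predicted from auxiliary buffers (normal, albedo, depth), to a denoiser's intermediate features. The module structure, layer sizes, and channel counts here are illustrative assumptions in PyTorch, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class AuxFeatureModulation(nn.Module):
    """FiLM-style conditioning: auxiliary buffers (normal, albedo, depth)
    predict a per-channel scale and shift for the denoiser's feature maps.
    Hypothetical layer sizes for illustration, not the paper's design."""
    def __init__(self, aux_channels=7, feat_channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(aux_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            # Two heads in one conv: multiplicative scale + additive shift.
            nn.Conv2d(feat_channels, 2 * feat_channels, 3, padding=1),
        )

    def forward(self, features, aux):
        scale, shift = self.encoder(aux).chunk(2, dim=1)
        # Modulate noisy-image features with the auxiliary signal.
        return features * (1 + scale) + shift

# Usage: 7 auxiliary channels = normal (3) + albedo (3) + depth (1).
feats = torch.randn(1, 64, 128, 128)   # denoiser features
aux = torch.randn(1, 7, 128, 128)      # auxiliary buffers
out = AuxFeatureModulation()(feats, aux)
```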
Anisotropic Surface Remeshing without Obtuse Angles
Qun-Ce Xu, Dong-Ming Yan, Wenbin Li, Yong-Liang Yang
Computer Graphics Forum (Proceedings of Pacific Graphics 2019)
[paper] [project page]
Abstract
We present a novel anisotropic surface remeshing method that can efficiently eliminate obtuse angles. Unlike previous work, which can only suppress obtuse angles through expensive resampling and Lloyd-type iterations, our method relies on a simple yet efficient connectivity and geometry refinement that not only removes all obtuse angles, but also preserves the original mesh connectivity as much as possible. Our method can be used directly as a post-processing step to improve the quality of anisotropic meshes generated by existing algorithms. We evaluate our method on a variety of meshes with different geometry and topology, and compare it with representative prior work. The results demonstrate the effectiveness and efficiency of our approach.
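For readers unfamiliar with the angle criterion the paper targets, the snippet below shows the standard law-of-cosines test for flagging obtuse triangles in an indexed mesh. This is a hypothetical helper for illustration only; it does not reproduce the paper's connectivity and geometry refinement.

```python
import numpy as np

def obtuse_triangles(vertices, faces):
    """Return indices of triangles containing an obtuse angle.
    A triangle is obtuse iff its longest squared edge exceeds the
    sum of the other two squared edges (law of cosines)."""
    v = vertices[faces]  # (F, 3, 3) corner positions per face
    # Squared length of the edge opposite each corner.
    e = np.stack([
        np.sum((v[:, 1] - v[:, 2]) ** 2, axis=1),
        np.sum((v[:, 2] - v[:, 0]) ** 2, axis=1),
        np.sum((v[:, 0] - v[:, 1]) ** 2, axis=1),
    ], axis=1)
    longest = e.max(axis=1)
    return np.where(longest > e.sum(axis=1) - longest)[0]

# Example: a right triangle (face 0) and an obtuse one (face 1).
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [3, 0.1, 0]], float)
faces = np.array([[0, 1, 2], [0, 1, 3]])
print(obtuse_triangles(verts, faces))  # -> [1]
```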
HoloGAN: Unsupervised Learning of 3D Representations from Natural Images
Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, Yong-Liang Yang
ICCV 2019
[paper] [supp] [arXiv] [project page]
Abstract
We propose a novel generative adversarial network (GAN) for the task of unsupervised learning of 3D representations from natural images. Most generative models rely on 2D kernels to generate images and make few assumptions about the 3D world. These models therefore tend to create blurry images or artefacts in tasks that require a strong 3D understanding, such as novel-view synthesis. HoloGAN instead learns a 3D representation of the world and how to render this representation realistically. Unlike other GANs, HoloGAN provides explicit control over the pose of generated objects through rigid-body transformations of the learnt 3D features. Our experiments show that using explicit 3D features enables HoloGAN to disentangle 3D pose and identity, which is further decomposed into shape and appearance, while still generating images with visual quality similar to or higher than that of other generative models. HoloGAN can be trained end-to-end from unlabelled 2D images only; in particular, it requires no pose labels, 3D shapes, or multiple views of the same objects. This makes HoloGAN the first generative model to learn 3D representations from natural images in an entirely unsupervised manner.
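The core pose-control idea, rigidly transforming a learnt 3D feature volume before it is rendered to 2D, can be sketched with standard 3D resampling. The function below is an illustrative PyTorch approximation; the names, tensor shapes, and yaw-only rotation are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def rotate_features(volume, yaw):
    """Apply a rigid-body rotation (here: yaw about the vertical axis)
    to a 3D feature volume by resampling it on a rotated grid, in the
    spirit of HoloGAN's pose transform. Shapes are illustrative."""
    c, s = torch.cos(yaw), torch.sin(yaw)
    zero, one = torch.zeros_like(c), torch.ones_like(c)
    # One 3x4 affine matrix per batch element (rotation, no translation).
    theta = torch.stack([
        torch.stack([c, zero, -s, zero], -1),
        torch.stack([zero, one, zero, zero], -1),
        torch.stack([s, zero, c, zero], -1),
    ], dim=-2)
    grid = F.affine_grid(theta, volume.shape, align_corners=False)
    return F.grid_sample(volume, grid, align_corners=False)

# A batch of learnt 3D features: (batch, channels, D, H, W).
vol = torch.randn(2, 16, 8, 8, 8)
posed = rotate_features(vol, yaw=torch.tensor([0.0, 1.57]))
```

In HoloGAN the transformed volume is then projected and decoded into a 2D image by the generator, so changing the rotation parameters changes the pose of the generated object without changing its identity.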