Yiguo Qiao paper accepted at ACM International Conference on Multimedia 2021
News

CAMERA researcher Yiguo Qiao has had her paper ‘Fast, High-Quality Hierarchical Depth-Map Super-Resolution’ accepted at the prestigious ACM International Conference on Multimedia 2021.
Yiguo’s research interests cover the fields of image processing, computer vision, medical data analysis and motion capture, focusing specifically on 3D stereoscopic vision. Working with Prof Darren Cosker and Dr Wenbin Li, she is currently engaged in research on motion retargeting and style transfer.
Since the founding of ACM SIGMM in 1993, ACM Multimedia has been the worldwide premier conference and a key world event for showcasing scientific achievements and innovative industrial products in the multimedia field. This year’s conference takes place online and in person from 20th to 24th October.
Abstract:
A fast and high-quality hierarchical depth-map super-resolution (HDS) method is proposed in this work. Instead of one-step upsampling, a hierarchical image pyramid strategy is adopted: at each layer, we upsample the low-resolution depth map by a sampling scale of 2, under the guidance of a pre-downsampled RGB image of the same resolution at that layer. To obtain sharper and clearer depth edges, we construct a context-adaptive, classification-based trilateral filter that upgrades the basic HDS method to a C-HDS method. Given original images of the same quality, both the proposed basic HDS and the upgraded C-HDS outperform the current state-of-the-art approaches, especially at large scales (6×), and higher-quality original depth maps result in higher upsampling quality. In addition, the program is stable, training-free and easy to implement, with run times that, to the best of our knowledge and based on claimed runtimes, outperform those of other methods.
Beyond super-resolution, the proposed method is also applicable to depth-map inpainting: blank pixels are eroded away during the degradation of the depth map and filled in during the upsampling process. In addition, owing to their strong interpretability, our methods can be applied simply and widely to other types of fused data similar to RGB-D data, such as RGB-T (thermal) data. Like most methods, ours is insensitive to thin lines, which are easily lost in low-resolution depth maps and difficult to recover during upsampling; when depth ranges are lost completely, completion appears to be impossible. Our future work is to find the linear correspondence between RGB images and depth maps using a depth-supervised, pixel-level RGB image classification strategy. With this, the low-resolution depth map can be upsampled with guided filters under the guidance of the classification result.
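For readers who want a feel for the hierarchical pyramid strategy the abstract describes, the sketch below shows a generic 2×-per-level, RGB-guided upsampling loop. It is not the authors' implementation: the function name hierarchical_depth_upsample and its parameters are illustrative, and OpenCV's joint bilateral filter (from the opencv-contrib ximgproc module) stands in for the paper's context-adaptive, classification-based trilateral filter.

```python
import cv2
import numpy as np

def hierarchical_depth_upsample(depth_lr, rgb_hr, scale=8):
    """Illustrative sketch: upsample a low-resolution depth map in 2x steps,
    guided at each level by the RGB image downsampled to a matching resolution.
    A joint bilateral filter stands in for the paper's trilateral filter."""
    levels = max(1, int(np.log2(scale)))       # number of 2x pyramid levels

    # Build the RGB guidance pyramid, from full resolution down to the coarsest level.
    guides = [rgb_hr.astype(np.float32)]
    for _ in range(levels - 1):
        guides.append(cv2.pyrDown(guides[-1]))
    guides = guides[::-1]                      # coarsest guide first

    depth = depth_lr.astype(np.float32)
    for guide in guides:
        # Step 1: plain (approximately 2x) upsampling to the guide's resolution.
        depth = cv2.resize(depth, (guide.shape[1], guide.shape[0]),
                           interpolation=cv2.INTER_LINEAR)
        # Step 2: refine depth edges against the RGB guide
        # (requires opencv-contrib-python for cv2.ximgproc).
        depth = cv2.ximgproc.jointBilateralFilter(
            guide, depth, d=9, sigmaColor=25, sigmaSpace=7)
    return depth

# Usage (hypothetical file names): recover an 8x-upsampled depth map.
rgb_hr = cv2.imread("rgb.png")
depth_lr = cv2.imread("depth_lr.png", cv2.IMREAD_UNCHANGED)
depth_hr = hierarchical_depth_upsample(depth_lr, rgb_hr, scale=8)
```

Each iteration roughly doubles the depth resolution and then sharpens edges using the RGB guide at that level, which is the essence of the pyramid strategy; the paper replaces the plain joint bilateral step with its context-adaptive trilateral filter.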