
Computational Photography and Applications

Computational photography, or computational imaging, refers to digital image capture and processing techniques that use digital computation instead of optical processes. Computational photography can improve the capabilities of a camera, introduce features that were not possible at all with film-based photography, or reduce the cost or size of camera elements. Examples of computational photography include in-camera computation of digital panoramas, high-dynamic-range images, and light field cameras. Light field cameras use novel optical elements to capture three-dimensional scene information, which can then be used to produce 3D images, extended depth of field, and selective de-focusing (or "post-focus"). Extended depth of field reduces the need for mechanical focusing systems. All of these features rely on computational imaging techniques.
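To make one of the examples above concrete, the sketch below merges an exposure bracket into a single high-dynamic-range radiance map by weighted averaging in linear radiance space. This is only a minimal illustration, not any particular camera's pipeline; the function name, the hat-shaped weighting, and the assumption of a linear sensor response are all ours.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge an exposure bracket into an HDR radiance map.

    images: list of float arrays in [0, 1], all the same shape, assumed to
            have a linear sensor response (illustrative assumption).
    exposure_times: exposure time in seconds for each image.
    """
    eps = 1e-8
    numerator = np.zeros_like(images[0], dtype=np.float64)
    denominator = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Hat weighting: trust mid-tone pixels, down-weight pixels that are
        # nearly black (noisy) or nearly white (clipped).
        w = 1.0 - np.abs(2.0 * img - 1.0)
        numerator += w * (img / t)   # per-image radiance estimate
        denominator += w
    return numerator / (denominator + eps)
```

A real pipeline would first linearize the camera response (for example with Debevec and Malik's method) and then tone-map the merged radiance map for display; the sketch only shows the merging step.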


Depth estimation and optimization from multiple depth cues based on camera array [Finished]

PI: Qing Wang | Code: 61272287 | Support: NSFC | Start/End: 2013-01-01/2016-12-31

Accurate depth estimation is one of the key problems in 3D scene reconstruction and visualization, and it underpins many computer vision applications such as object tracking, scene segmentation, and visual navigation. At present, depth estimation from a single depth cue remains an open problem in computer vision. To exploit the multiple depth cues available in a dense camera array, we investigate three key aspects: acquisition of multiple depth cues, depth estimation from multiple cues, and accurate depth map optimization.

To extract multiple depth cues from the light field, we built a camera array system to capture the target scene. The elemental cameras are accurately calibrated, so they can be used for synthetic imaging with a 2D or 3D synthetic focal plane. We then extract depth-related structure cues, the parallax cue, and the focus cue of the target scene from the light field EPI, the refocused image, and the confocal image, respectively, and show that the parallax cue and the focus cue are complementary.

For depth estimation, we introduce a novel ground control points (GCPs) based method to obtain a dense disparity map. Focusing on the parallax cue, we propose a segmentation-tree based cost aggregation that produces a more robust disparity estimate for each pixel. We also propose a multi-occlusion model for the light field, which handles occluded regions during depth estimation. Finally, based on a light field sampling analysis, we propose a multi-cue fusion algorithm that estimates depth within a Markov Random Field framework, combining the advantages of shape from stereo and shape from focus. The fused algorithm is more accurate than depth estimation from any single cue.

To optimize the estimated depth, we first propose an outlier removal method based on penalized linear regression, which eliminates the influence of outliers. For occluded regions, we propose a global optimization based on the surface camera model and stereo matching, which achieves sub-pixel depth accuracy. To address aliasing artifacts in light field imaging, we propose an angular aliasing detection algorithm that randomly shifts the aperture model, followed by a multi-scale anti-aliasing rendering algorithm that stitches the non-aliased image regions together, significantly improving confocal imaging quality. We have also studied related techniques and applications, including multi-view video synchronization, light field super-pixel segmentation, local feature extraction for light fields, and applications such as live face detection.
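To illustrate the refocusing and focus-cue steps described above, the sketch below shows shift-and-add synthetic aperture refocusing for a calibrated camera array, together with a simple local-variance sharpness measure of the kind a focus cue could be built from. It is only an illustrative sketch under simplifying assumptions (a regular planar array, integer pixel shifts, grayscale views); the function and parameter names are ours, not from the project code.

```python
import numpy as np

def refocus(views, slope):
    """Shift-and-add synthetic aperture refocusing.

    views: dict mapping (u, v) grid positions of a regular planar camera
           array (in baseline units) to HxW grayscale images.
    slope: disparity in pixels per unit baseline for the chosen synthetic
           focal plane; sweeping it moves the focal plane through the scene.
    """
    acc = None
    for (u, v), img in views.items():
        # Shift each view so that points on the focal plane align across views.
        dy, dx = int(round(slope * v)), int(round(slope * u))
        shifted = np.roll(img, shift=(dy, dx), axis=(0, 1))
        acc = shifted.astype(np.float64) if acc is None else acc + shifted
    return acc / len(views)

def focus_measure(img, win=7):
    """Local variance as a simple sharpness (focus) measure."""
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + win, j:j + win].var()
    return out
```

Sweeping `slope` over a range of focal planes and taking, per pixel, the slope with the largest focus measure gives a shape-from-focus estimate that could then be fused with the parallax cue, for example as one data term in the Markov Random Field framework mentioned above.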

After four years of work, we have filed 4 patent applications in China and published 20 papers, including 2 papers in the journals TIP and TCSVT and 2 papers at the CCF Rank A conferences ICCV and CVPR. With the support of this NSFC grant, we have also cultivated 2 NSFC Young Scholar funds and trained 5 Ph.D. students and 10 master's students.

 

Keywords: Depth estimation; Camera array; Multiple depth cues; Global optimization; Depth evaluation model


A Multi-scale Anti-aliasing Rendering Algorithm for the Light Field Imaging
Zhaolin Xiao, Qing Wang, Guoqing Zhou, Heng Yang
计算机辅助设计与图形学学报 (Journal of CAD and CG), 26(7):1126-1134
Paper | Code | BibTeX | Github

"The man can be destroyed but not defeated。" - Ernest Miller Hemingway