Computer Science ›› 2017, Vol. 44 ›› Issue (1): 32-36. doi: 10.11896/j.issn.1002-137X.2017.01.006

• 2016 6th China Conference on Data Mining •

Depth Estimation from Single Defocused Image Based on Gaussian-Cauchy Mixed Model

XUE Song, WANG Wen-jian

  1. School of Computer and Information Technology, Shanxi University, Taiyuan 030006, China; Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, Shanxi University, Taiyuan 030006, China
  • Online: 2018-11-13  Published: 2018-11-13
  • Supported by:
    This work was supported by the National Natural Science Foundation of China (61273291) and the Scientific Research Foundation for Returned Overseas Scholars of Shanxi Province.


Abstract: Recovering scene depth from a single image has long been a difficult problem in computer vision. A common approach approximates the point spread function (PSF) with a Gaussian or a Cauchy distribution and then estimates depth from the relationship between the amount of defocus blur at image edges and the scene depth. However, the causes of blur in real-world images vary widely, so a single Gaussian or Cauchy distribution is not necessarily the best approximation; such methods are also easily disturbed by noise and inaccurate edge localization, and they recover depth poorly in regions with shadows, weak edges, or subtle depth variation. To extract more accurate depth information, this paper proposes approximating the PSF with a Gaussian-Cauchy mixed model. The given defocused image is re-blurred with this model, yielding two images with different degrees of defocus; the ratio of the gradients at edge locations of the two images gives the defocus blur amount at each edge and hence a sparse depth map; finally, a depth propagation step (matting Laplacian interpolation) extends the sparse map to a full depth map of the scene. Experiments on a large number of real images show that the proposed method recovers complete and reliable depth information from a single defocused image and outperforms two commonly used methods.

Key words: Depth estimation, Defocus blur amount, Gaussian-Cauchy mixed model (GC-PSF)
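
The gradient-ratio step in the abstract can be made concrete with the standard step-edge derivation used in re-blur based defocus estimation. What follows is the pure-Gaussian special case only; the mixture form of the PSF is written schematically, and the mixing weight \gamma and kernel parametrization are illustrative assumptions rather than values taken from the paper.

A defocused step edge is modeled as
    i(x) = (A\,u(x) + B) \otimes h(x;\sigma),
where the mixed PSF is assumed to take the form
    h(x;\sigma) = \gamma\, G(x;\sigma) + (1-\gamma)\, C(x;\sigma), \qquad 0 \le \gamma \le 1,
with G a Gaussian kernel and C a Cauchy kernel. Re-blurring i with a known Gaussian scale \sigma_0 and taking the gradient-magnitude ratio at the edge location gives, in the Gaussian-only case,
    R = \frac{|\nabla i(x)|}{|\nabla i_r(x)|} = \frac{\sqrt{\sigma^2 + \sigma_0^2}}{\sigma}
      \;\Longrightarrow\; \sigma = \frac{\sigma_0}{\sqrt{R^2 - 1}}.
The recovered blur \sigma is proportional to the circle-of-confusion diameter, which the thin-lens model commonly relates to object distance d by
    c = \frac{|d - d_f|}{d} \cdot \frac{f_0^2}{N\,(d_f - f_0)},
with focus distance d_f, focal length f_0 and f-number N; this is the blur-depth relationship from which the sparse depth map is built.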

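A minimal Python/OpenCV sketch of the same pipeline is given below. It is not the authors' implementation: a plain Gaussian re-blur stands in for the Gaussian-Cauchy mixed kernel, the re-blur scale sigma0 and the Canny thresholds are arbitrary choices, the input file name is hypothetical, and a normalized-convolution fill replaces the matting-Laplacian propagation used in the paper.

# Sketch of the re-blur / gradient-ratio defocus pipeline described in the abstract.
# Assumptions: Gaussian-only re-blur kernel, arbitrary sigma0 and Canny thresholds,
# and a crude normalized-convolution fill instead of matting-Laplacian interpolation.
import cv2
import numpy as np

def sparse_defocus_map(gray, sigma0=1.0, eps=1e-6):
    # Re-blur the (already defocused) image with a known kernel scale sigma0.
    img = gray.astype(np.float64) / 255.0
    reblur = cv2.GaussianBlur(img, (0, 0), sigma0)

    # Gradient magnitudes of the original and the re-blurred image.
    gx, gy = np.gradient(img)
    rx, ry = np.gradient(reblur)
    grad = np.sqrt(gx ** 2 + gy ** 2)
    grad_r = np.sqrt(rx ** 2 + ry ** 2)

    # Gradient ratio R, then per-pixel blur sigma = sigma0 / sqrt(R^2 - 1).
    ratio = np.clip(grad / (grad_r + eps), 1.0 + eps, None)
    sigma = sigma0 / np.sqrt(ratio ** 2 - 1.0)

    # Keep estimates only at edge locations: this is the sparse depth (blur) map.
    edges = cv2.Canny(gray, 50, 150) > 0
    return np.where(edges, sigma, 0.0), edges

def propagate(sparse, edges, smooth=15.0):
    # Crude stand-in for matting-Laplacian interpolation: mask-weighted smoothing.
    w = edges.astype(np.float64)
    num = cv2.GaussianBlur(sparse * w, (0, 0), smooth)
    den = cv2.GaussianBlur(w, (0, 0), smooth)
    return num / (den + 1e-6)

if __name__ == "__main__":
    gray = cv2.imread("defocused.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    sparse, edges = sparse_defocus_map(gray)
    full_map = propagate(sparse, edges)  # larger blur ~ farther from the focal plane

The clip on the gradient ratio reflects the fact that re-blurring can only weaken edge gradients, so ratio values at or below 1 are noise and would otherwise yield invalid blur estimates.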
