Computer Science ›› 2018, Vol. 45 ›› Issue (10): 255-260.doi: 10.11896/j.issn.1002-137X.2018.10.047

• Graphics, Image & Pattern Recognition •

Efficient Method of Lane Detection Based on Multi-frame Blending and Windows Searching

CHEN Han-shen1,2, YAO Ming-hai1, CHEN Zhi-hao1, YANG Zhen1   

  1. College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
  2. Zhejiang Institute of Communications, Hangzhou 311112, China
  • Received: 2017-08-12  Online: 2018-11-05  Published: 2018-11-05

Abstract: Lane detection is one of the most important research areas in driver assistance and automated driving. Many efficient lane detection algorithms have been proposed in recent years, but most of them still find it hard to balance computational efficiency and accuracy. This paper presents a real-time and robust approach to lane detection based on multi-frame blending and window searching. Firstly, the image is cropped and mapped to create a bird's-eye view of the road. Then, the RGB image is converted to a binary image using a threshold derived from multi-frame blending. Next, the starting point of the lane line is located from the pixel density distribution in the near field of view, and the whole lane is extracted by a sliding-window search. Finally, according to the features of the candidate lane, different lane models are defined and selected, and the model parameters are obtained by Least Squares Estimation (LSE). The proposed algorithm shows good performance when tested on real-world data containing various lane conditions.
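
The processing steps described above can be illustrated with a short sketch. The following Python code (assuming OpenCV and NumPy) outlines one possible arrangement of the bird's-eye warp, multi-frame blending with thresholding, histogram-based starting-point estimation, and sliding-window search with a least-squares polynomial fit. The perspective points, blending weight, threshold, window count and margins are illustrative assumptions, not parameters from the paper.

    # Minimal sketch of a multi-frame-blending + sliding-window lane pipeline.
    # All numeric parameters below are illustrative assumptions.
    import cv2
    import numpy as np

    def birds_eye_view(frame, src_pts, dst_size=(400, 600)):
        """Warp the cropped road region to a top-down (bird's-eye) view.
        src_pts: 4 points in the order bottom-left, bottom-right, top-right, top-left."""
        w, h = dst_size
        dst_pts = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
        M = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
        return cv2.warpPerspective(frame, M, dst_size)

    def blend_and_binarize(curr_gray, prev_blend, alpha=0.6, thresh=60):
        """Blend the current grayscale frame with the previous blended frame,
        then threshold the result to a binary lane image."""
        blend = cv2.addWeighted(curr_gray, alpha, prev_blend, 1.0 - alpha, 0.0)
        _, binary = cv2.threshold(blend, thresh, 255, cv2.THRESH_BINARY)
        return binary, blend

    def sliding_window_search(binary, n_windows=9, margin=50, min_pix=40):
        """Find the lane starting column from the near-field pixel histogram,
        track it upward window by window, and fit x = a*y^2 + b*y + c by LSE."""
        h, w = binary.shape
        histogram = np.sum(binary[h // 2:, :], axis=0)   # near field of view
        x_current = int(np.argmax(histogram))            # starting point
        ys, xs = binary.nonzero()
        window_h = h // n_windows
        lane_idx = []
        for win in range(n_windows):
            y_low, y_high = h - (win + 1) * window_h, h - win * window_h
            x_low, x_high = x_current - margin, x_current + margin
            good = ((ys >= y_low) & (ys < y_high) &
                    (xs >= x_low) & (xs < x_high)).nonzero()[0]
            lane_idx.append(good)
            if len(good) > min_pix:                      # recentre the next window
                x_current = int(xs[good].mean())
        lane_idx = np.concatenate(lane_idx)
        return np.polyfit(ys[lane_idx], xs[lane_idx], 2)  # least-squares fit

A full implementation would run birds_eye_view and blend_and_binarize on every frame, keep the returned blend as the running average for the next frame, and evaluate the fitted polynomial to project the detected lane back onto the original view.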

Key words: ADAS, Lane detection, Multi-frame blending, Windows searching

CLC Number: TP391