Computer Science, 2024, Vol. 51, Issue (7): 236-243. doi: 10.11896/jsjkx.230400128
蔡汶良, 黄俊
CAI Wenliang, HUANG Jun
Abstract: To address the slow detection speed and low accuracy of existing lane detection methods, lane detection is formulated as a classification problem and a real-time lane detection method based on the RepVGG network is proposed. Feature maps from different levels are fused within the RepVGG network to reduce the loss of spatial localization information and improve lane localization accuracy. A curve-modeling post-processing step corrects the predicted lanes from both global and local perspectives. By mining the distribution information contained in lane localization, a distribution-guided lane existence prediction branch is proposed, which learns lane existence features directly from the lane localization distribution, further improving detection accuracy while slightly increasing inference speed. Experiments on the TuSimple and CULane datasets show that the model achieves a good balance between detection speed and accuracy. On CULane, the inference speed of the proposed method is 1.13 times that of UFLDv2, the fastest comparable method, while the F1 score rises from 74.7% to 77.1%, meeting the requirements of real-time detection.
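To make the row-wise classification idea in the abstract concrete, the following is a minimal sketch, not the authors' released implementation: for each lane and each row anchor, a head predicts a distribution over horizontal grid cells, localization is read out as the expectation of that distribution, and a lane-existence score is derived directly from how peaked the distribution is, roughly in the spirit of the distribution-guided existence branch described above. The class name RowClassificationHead, the grid sizes, and the max-probability existence heuristic are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RowClassificationHead(nn.Module):
    # Hypothetical head: for each lane and each row anchor, predict a distribution
    # over horizontal grid cells; localization is the expectation of that
    # distribution, and existence is scored from how peaked the distribution is.
    def __init__(self, in_dim, num_lanes=4, num_rows=56, num_cols=100):
        super().__init__()
        self.num_lanes, self.num_rows, self.num_cols = num_lanes, num_rows, num_cols
        self.cls = nn.Linear(in_dim, num_lanes * num_rows * num_cols)

    def forward(self, feat):  # feat: (B, in_dim) pooled backbone features
        logits = self.cls(feat).view(-1, self.num_lanes, self.num_rows, self.num_cols)
        prob = F.softmax(logits, dim=-1)                 # per-row distribution over columns
        cols = torch.arange(self.num_cols, device=feat.device, dtype=prob.dtype)
        loc = (prob * cols).sum(dim=-1)                  # soft-argmax: expected column per row
        # Distribution-guided existence (assumed heuristic): a sharply peaked row
        # distribution suggests a visible lane point, so average the peak
        # probability over all row anchors to score each lane's existence.
        exist = prob.max(dim=-1).values.mean(dim=-1)     # (B, num_lanes), values in (0, 1]
        return loc, exist

# Usage sketch:
# head = RowClassificationHead(in_dim=512)
# loc, exist = head(torch.randn(2, 512))   # loc: (2, 4, 56), exist: (2, 4)

Reading localization out as an expectation keeps the output differentiable, and because the existence score reuses the same per-row distribution, it adds almost no extra computation, which is consistent with the abstract's claim that the branch slightly improves speed while raising accuracy.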
[1] JIANG Y, GAO F, XU G. Computer vision-based multiple-lane detection on straight road and in a curve[C]//2010 International Conference on Image Analysis and Signal Processing. IEEE, 2010: 114-117.
[2] HU H D, LIU G R, WANG L L, et al. A Lane Detection Algorithm Based on Vanishing Point and Color Filter[J]. Journal of Chongqing Technology and Business University (Natural Science Edition), 2023, 40(5): 25-33.
[3] JUNG H, MIN J, KIM J. An efficient lane detection algorithm for lane departure detection[C]//2013 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2013: 976-981.
[4] WANG Q, HAN T, QIN Z, et al. Multitask attention network for lane detection and fitting[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020, 33(3): 1066-1078.
[5] QIU Q, GAO H, HUA W, et al. PriorLane: A Prior Knowledge Enhanced Lane Detection Approach Based on Transformer[J]. arXiv:2209.06994, 2022.
[6] PAN H, HONG Y, SUN W, et al. Deep dual-resolution networks for real-time and accurate semantic segmentation of traffic scenes[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 24(3): 3448-3460.
[7] ZHENG T, HUANG Y, LIU Y, et al. CLRNet: Cross Layer Refinement Network for Lane Detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 898-907.
[8] CHEN L C, PAPANDREOU G, KOKKINOS I, et al. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(4): 834-848.
[9] DONG Y, PATIL S, VAN AREM B, et al. A hybrid spatial-temporal deep learning architecture for lane detection[J]. Computer-Aided Civil and Infrastructure Engineering, 2022, 38(1): 67-86.
[10] HOU Y. Agnostic lane detection[J]. arXiv:1905.03704, 2019.
[11] NEVEN D, DE BRABANDERE B, GEORGOULIS S, et al. Towards End-to-End Lane Detection: An Instance Segmentation Approach[C]//2018 IEEE Intelligent Vehicles Symposium (IV). Changshu: Institute of Electrical and Electronics Engineers Inc., 2018: 286-291.
[12] YANG J, ZHANG L, LU H. Lane Detection with Versatile AtrousFormer and Local Semantic Guidance[J]. Pattern Recognition, 2023, 133: 109053.
[13] HOU Y, MA Z, LIU C, et al. Learning lightweight lane detection CNNs by self attention distillation[C]//17th IEEE/CVF International Conference on Computer Vision. Seoul: Institute of Electrical and Electronics Engineers Inc., 2019: 1013-1021.
[14] TIAN S, ZHANG J F, ZHANG Y T, et al. Lane Detection Algorithm Based on Dilated Convolution Pyramid Network[J]. Journal of Southwest Jiaotong University, 2020, 55(2): 386-392, 416.
[15] QIN Z, WANG H, LI X. Ultra fast structure-aware deep lane detection[C]//European Conference on Computer Vision. Glasgow: Springer, 2020: 276-291.
[16] QIN Z, ZHANG P, LI X. Ultra Fast Deep Lane Detection With Hybrid Anchor Driven Ordinal Classification[J]. arXiv:2206.07389, 2022.
[17] YOO S, SEOK L H, MYEONG H, et al. End-to-end lane marker detection via row-wise classification[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Virtual: IEEE Computer Society, 2020: 4335-4343.
[18] FENG Z, GUO S, TAN X, et al. Rethinking Efficient Lane Detection via Curve Modeling[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans: IEEE, 2022: 17041-17049.
[19] HAN J, DENG X, CAI X, et al. Laneformer: Object-aware Row-Column Transformers for Lane Detection[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2022: 799-807.
[20] TABELINI L, BERRIEL R, PAIXÃO T M, et al. PolyLaneNet: Lane Estimation via Deep Polynomial Regression[C]//2020 25th International Conference on Pattern Recognition (ICPR). Milan, Italy: IEEE, 2021: 6150-6156.
[21] DING X, ZHANG X, MA N, et al. RepVGG: Making VGG-style ConvNets Great Again[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Virtual: IEEE Computer Society, 2021: 13728-13737.
[22] HE K, ZHANG X, REN S, et al. Deep Residual Learning for Image Recognition[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 2016: 770-778.
[23] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[C]//3rd International Conference on Learning Representations (ICLR). San Diego, 2015.
[24] HU J, SHEN L, ALBANIE S, et al. Squeeze-and-Excitation Networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 42(8): 2011-2023.
[25] PAN X, SHI J, LUO P, et al. Spatial as deep: Spatial CNN for traffic scene understanding[C]//32nd AAAI Conference on Artificial Intelligence. New Orleans: AAAI Press, 2018: 7276-7283.
[26] GHAFOORIAN M, NUGTEREN C, BAKA N, et al. EL-GAN: Embedding Loss Driven Generative Adversarial Networks for Lane Detection[C]//15th European Conference on Computer Vision. Munich: Springer, 2018: 256-272.
[27] PHILION J. FastDraw: Addressing the long tail of lane detection by adapting a sequential prediction network[C]//32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 11574-11583.
[28] TABELINI L, BERRIEL R, PAIXÃO T M, et al. Keep your Eyes on the Lane: Real-time Attention-guided Lane Detection[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Virtual: IEEE Computer Society, 2021: 294-302.
[29] ZHENG T, FANG H, ZHANG Y, et al. RESA: Recurrent Feature-Shift Aggregator for Lane Detection[C]//35th AAAI Conference on Artificial Intelligence. Virtual: Association for the Advancement of Artificial Intelligence, 2021: 3547-3554.
[30] SU J, CHEN C, ZHANG K, et al. Structure Guided Lane Detection[C]//30th International Joint Conference on Artificial Intelligence. 2021: 997-1003.
[31] QU Z, JIN H, ZHOU Y, et al. Focus on local: Detecting lane marker from bottom up via key point[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Virtual: IEEE Computer Society, 2021: 14122-14130.