Computer Science, 2023, Vol. 50, Issue 11A: 221100066-7. doi: 10.11896/jsjkx.221100066
• Image Processing & Multimedia Technology •
LIU Xudong1, YU Ping2
[1] ZHANG N. Research on image matching algorithm in indoor visual location [D]. Shenyang: Shenyang University of Technology, 2020.
[2] MCMANUS C, CHURCHILL W, MADDERN W, et al. Shady dealings: Robust, long-term visual localisation using illumination invariance [C]//2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014: 901-906.
[3] LIAO C J, HOU Y K, XIN L. Research on the Operation and Service Mechanism of China's High Resolution Remote Sensing Application Satellite [J]. Satellite Applications, 2014(2): 57-61.
[4] MIDDELBERG S, SATTLER T, UNTZELMANN O, et al. Scalable 6-DOF localization on mobile devices [C]//European Conference on Computer Vision. Cham: Springer, 2014: 268-283.
[5] BANSAL M, SAWHNEY H S, CHENG H, et al. Geo-localization of street views with aerial image databases [C]//Proceedings of the 19th ACM International Conference on Multimedia. 2011: 1125-1128.
[6] LI S. Indoor positioning system based on position feature image detection [D]. Wuhan: Huazhong University of Science and Technology, 2019.
[7] WORKMAN S, JACOBS N. On the location dependence of convolutional neural network features [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2015: 70-78.
[8] VO N N, HAYS J. Localizing and orienting street views using overhead imagery [C]//European Conference on Computer Vision. Cham: Springer, 2016: 494-509.
[9] HU S, FENG M, NGUYEN R M H, et al. CVM-Net: Cross-view matching network for image-based ground-to-aerial geo-localization [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 7258-7267.
[10] CAI S, GUO Y, KHAN S, et al. Ground-to-aerial image geo-localization with a hard exemplar reweighting triplet loss [C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 8391-8400.
[11] SUN B, CHEN C, ZHU Y, et al. GeoCapsNet: Ground to aerial view image geo-localization using capsule network [C]//2019 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2019: 742-747.
[12] REGMI K, SHAH M. Bridging the domain gap for ground-to-aerial image matching [C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 470-479.
[13] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks [J]. Communications of the ACM, 2017, 60(6): 84-90.
[14] ARANDJELOVIC R, GRONAT P, TORII A, et al. NetVLAD: CNN architecture for weakly supervised place recognition [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 5297-5307.
[15] HORNIK K, STINCHCOMBE M, WHITE H. Multilayer feedforward networks are universal approximators [J]. Neural Networks, 1989, 2(5): 359-366.
[16] WORKMAN S, SOUVENIR R, JACOBS N. Wide-area image geolocalization with aerial reference imagery [C]//Proceedings of the IEEE International Conference on Computer Vision. 2015: 3961-3969.
[17] GE Y, WANG H, ZHU F, et al. Self-supervising fine-grained region similarities for large-scale image localization [C]//European Conference on Computer Vision. Cham: Springer, 2020: 369-386.
[18] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.
[19] LIU L, LI H. Lending orientation to neural networks for cross-view geo-localization [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 5624-5633.
[20] ZHENG Z, WEI Y, YANG Y. University-1652: A multi-view multi-source benchmark for drone-based geo-localization [C]//Proceedings of the 28th ACM International Conference on Multimedia. 2020: 1395-1403.
[21] SHI Y, YU X, LIU L, et al. Optimal feature transport for cross-view image geo-localization [C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2020, 34(7): 11990-11997.