Computer Science ›› 2023, Vol. 50 ›› Issue (11): 114-121.doi: 10.11896/jsjkx.221000058

• Database & Big Data & Data Science •

Road Network Topology-aware Trajectory Representation Learning

CHEN Jiajun, CHEN Wei, ZHAO Lei   

  1. School of Computer Science and Technology,Soochow University,Suzhou,Jiangsu 215006,China
  • Received:2022-10-09 Revised:2023-02-13 Online:2023-11-15 Published:2023-11-06
  • About author: CHEN Jiajun, born in 1998, postgraduate, is a member of China Computer Federation. His main research interests include trajectory representation learning and deep learning. CHEN Wei, born in 1989, Ph.D, associate professor, is a member of China Computer Federation. His main research interests include data mining, recommendation systems and knowledge graph.
  • Supported by:
    National Natural Science Foundation of China (61902270) and Major Program of the Natural Science Foundation of Jiangsu Higher Education Institutions of China (19KJA610002).

Abstract: Existing approaches to trajectory representation learning (TRL) on road networks fall into two categories: sequence models based on recurrent neural networks (RNN) and long short-term memory (LSTM), and learning models based on the self-attention mechanism. Despite the significant contributions of these studies, they still suffer from the following problems. (1) The road network representation learning methods in existing work ignore the transition probability between connected road segments and thus cannot fully capture the topological structure of the given road network. (2) The self-attention based learning models perform better than sequence models on short and medium trajectories but underperform on long trajectories, as they fail to characterize the long-term semantic features of trajectories well. Motivated by these findings, this paper proposes a new trajectory representation learning model, namely trajectory representation learning on road networks via masked sequence to sequence network (TRMS). Specifically, the model extends the traditional DeepWalk algorithm with a probability-aware walk to fully capture the topological structure of road networks, and then combines the masked Seq2Seq learning framework with the self-attention mechanism in a unified manner to capture the long-term semantic features of trajectories. Finally, experiments on real-world datasets demonstrate that TRMS outperforms state-of-the-art methods in embedding short, medium, and long trajectories.
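The probability-aware walk described above can be illustrated with a minimal sketch: instead of choosing the next road segment uniformly at random, as in vanilla DeepWalk, the walker samples it in proportion to the observed transition probability between connected segments. The graph, segment names, and probability values below are hypothetical, not taken from the paper.

```python
import random

# Hypothetical road network: segment -> {neighbor segment: transition probability}.
road_graph = {
    "s1": {"s2": 0.7, "s3": 0.3},
    "s2": {"s1": 0.2, "s3": 0.8},
    "s3": {"s1": 0.5, "s2": 0.5},
}

def probability_aware_walk(graph, start, length, rng=random):
    """Random walk in which the next segment is sampled according to
    the transition probabilities on outgoing edges, so that frequently
    traversed connections dominate the generated walks."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = graph.get(walk[-1])
        if not neighbors:          # dead end: stop the walk early
            break
        segments = list(neighbors)
        weights = [neighbors[s] for s in segments]
        walk.append(rng.choices(segments, weights=weights, k=1)[0])
    return walk

walk = probability_aware_walk(road_graph, "s1", 5)
print(walk)  # e.g. ['s1', 's2', 's3', 's2', 's3']
```

In a DeepWalk-style pipeline, the walks generated this way would then be fed to a skip-gram model to produce segment embeddings that reflect both connectivity and transition likelihood.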

Key words: Road network, Topological structure, Trajectory representation learning, Sequence model, Self-attention mechanism

CLC Number: TP311