计算机科学 ›› 2026, Vol. 53 ›› Issue (4): 384-392.doi: 10.11896/jsjkx.250900032

• 人工智能 •

融合局部多视角语言特征和全局特征的对话情感四元组抽取

彭菊红1,3, 张正悦1,3, 丁子胥1,3, 范馨予1,3, 胡长玉1,3, 赵明俊2   

  1 湖北大学人工智能学院 武汉 430062
    2 湖北大学计算机学院 武汉 430062
    3 湖北大学智能感知系统与安全教育部重点实验室 武汉 430062
  • 收稿日期:2025-09-03 修回日期:2026-01-09 出版日期:2026-04-15 发布日期:2026-04-08
  • 通讯作者: 赵明俊(12669003@qq.com)
  • 作者简介: 彭菊红(juhongpeng@hubu.edu.cn)
  • 基金资助:
    国家自然科学基金面上项目(62377009)

Multi-view Local Language Feature and Global Feature Fusion for Conversational Aspect-based Sentiment Quadruple Analysis

PENG Juhong1,3, ZHANG Zhengyue1,3, DING Zixu1,3, FAN Xinyu1,3, HU Changyu1,3, ZHAO Mingjun2   

  1 School of Artificial Intelligence, Hubei University, Wuhan 430062, China
    2 School of Computer Science, Hubei University, Wuhan 430062, China
    3 Key Laboratory of Intelligent Perception Systems and Security, Ministry of Education (Hubei University), Wuhan 430062, China
  • Received:2025-09-03 Revised:2026-01-09 Published:2026-04-15 Online:2026-04-08
  • About author:PENG Juhong, born in 1978, Ph.D, associate professor. Her main research interests include signal processing and artificial intelligence methods.
    ZHAO Mingjun, born in 1974, Ph.D, lecturer. His main research interests include intelligent learning and deep learning.
  • Supported by:
    General Program of the National Natural Science Foundation of China(62377009).

摘要: 基于对话的方面情感四元组抽取(DiaASQ)是方面级情感分析(ABSA)领域的一个新兴研究方向,旨在从一段对话中识别并提取情感四元组(目标、方面、观点和情感极性)。与传统静态文本的ABSA任务相比,DiaASQ面临以下两大问题:1)对话文本通常较长,目标、方面、观点等情感要素可能分散在多个话语中,难以捕捉长距离依赖关系;2)对话文本结构复杂,通常包含多位发言者和回复关系,信息往往跨语句、跨说话人,回复结构更为复杂。针对上述问题,提出一种融合局部多视角语言特征和全局特征的对话情感四元组抽取(MVLLF-GF)方法。首先,利用多视角语言知识编码器从句法依存关系、语义信息等多个角度对词元进行交互增强,捕捉长距离依赖关系,学习局部特征;其次,使用全局话语编码器从话语层面学习发言者信息和回复关系信息,获取全局特征;再次,使用多粒度融合器对不同层面的特征进行深度整合,增强模型上下文理解能力;最后,使用网格标注的方法实现情感四元组的端到端解码。实验结果表明,在DiaASQ公开中文数据集ZH和英文数据集EN上,与基准模型MVQPN相比,所提模型在Micro F1指标上分别提升了9.13个百分点和6.50个百分点,证明了该方法的有效性。
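The end-to-end grid-tagging decoding step described in the abstract can be illustrated with a minimal sketch. The tag names ("tgt", "asp", "opn", "link") and the head-token linking rule below are hypothetical simplifications for illustration; the paper's actual tag set follows the richer grid tagging scheme of Wu et al. [4].

```python
# Illustrative grid-tagging decoder (hypothetical tag set, not the
# paper's actual scheme): entity spans are marked on the upper
# triangle of a token-pair grid, and span heads are joined by "link".

def decode_spans(grid, label):
    """Collect (start, end) token spans whose grid cell carries `label`."""
    n = len(grid)
    return [(i, j) for i in range(n) for j in range(i, n) if grid[i][j] == label]

def decode_quads(grid, polarity):
    """Join target/aspect/opinion spans whose head tokens are linked."""
    def linked(x, y):
        i, j = min(x[0], y[0]), max(x[0], y[0])
        return grid[i][j] == "link"
    return [(t, a, o, polarity)
            for t in decode_spans(grid, "tgt")
            for a in decode_spans(grid, "asp")
            for o in decode_spans(grid, "opn")
            if linked(t, a) and linked(a, o)]

# toy 6-token utterance: "the phone 's screen is clear"
n = 6
grid = [["O"] * n for _ in range(n)]
grid[1][1] = "tgt"      # target span: "phone"
grid[3][3] = "asp"      # aspect span: "screen"
grid[5][5] = "opn"      # opinion span: "clear"
grid[1][3] = "link"     # target-aspect link
grid[3][5] = "link"     # aspect-opinion link
quads = decode_quads(grid, "POS")
print(quads)            # [((1, 1), (3, 3), (5, 5), 'POS')]
```

The point of the grid formulation is that span detection and pairing are read off one table, so the quadruple is decoded end to end without a separate pipeline stage.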

关键词: 对话情感四元组抽取, 句法依存关系, 注意力机制, 语义信息, 图卷积网络

Abstract: Conversational aspect-based sentiment quadruple analysis (DiaASQ) is an emerging research direction in the field of aspect-based sentiment analysis (ABSA), which aims to identify and extract sentiment quadruples (target, aspect, opinion, and sentiment polarity) from a given dialogue. Compared with traditional ABSA tasks on static texts, DiaASQ faces two major challenges: 1) dialogue texts are often lengthy, with sentiment elements such as targets, aspects, and opinions scattered across multiple utterances, making it difficult to capture long-range dependencies; 2) dialogue structures are more complex, typically involving multiple speakers and reply relationships, where information frequently spans sentences and speakers, leading to intricate interaction patterns. To address these challenges, this paper proposes MVLLF-GF, a model that integrates multi-view local language features with global contextual representations for dialogue-based sentiment quadruple extraction. Specifically, a multi-view linguistic knowledge encoder is employed to enhance token-level interactions from multiple perspectives, including syntactic dependency and semantic information, thereby learning rich local features. A global utterance encoder is then introduced to capture global features by modeling speaker identities and reply relationships at the utterance level. Furthermore, a multi-granularity fusion module is designed to deeply integrate features across different levels, enhancing the model's contextual understanding. Finally, an end-to-end grid tagging mechanism is applied to decode sentiment quadruples. Experimental results on the public DiaASQ Chinese dataset (ZH) and English dataset (EN) demonstrate that the proposed method achieves Micro-F1 improvements of 9.13 and 6.50 percentage points, respectively, over the baseline model MVQPN, verifying its effectiveness.
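The syntactic view of the multi-view linguistic knowledge encoder can be sketched as a graph convolution over the dependency parse, in the style of Kipf and Welling [2]. The dependency arcs, embedding dimensions, and weights below are invented for the example; the paper's actual encoder (its dimensions and attention-based weighting) is not specified here.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One GCN layer: ReLU(D^{-1/2}(A + I)D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)      # ReLU activation

# toy 4-token sentence with hypothetical dependency arcs, treated as
# undirected edges (a common choice when feeding a parse into a GCN)
A = np.zeros((4, 4))
for head, dep in [(1, 0), (1, 3), (3, 2)]:
    A[head, dep] = A[dep, head] = 1.0

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))   # token embeddings (e.g. from a PLM)
W = rng.standard_normal((8, 8))   # layer weights
H1 = gcn_layer(H, A, W)           # syntax-aware token representations
```

Stacking such layers lets a token aggregate information from syntactic neighbors several arcs away, which is one way to bridge the long-range dependencies the abstract highlights.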

Key words: Conversational aspect-based sentiment quadruple extraction, Syntactic dependency relation, Attention mechanisms, Semantic information, Graph convolutional network
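The attention mechanism listed among the keywords is used at the utterance level in the global encoder; one minimal way to model reply relationships with attention is a reply-masked self-attention over utterance vectors. The mask construction and dimensions below are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def reply_masked_attention(H, mask):
    """Self-attention over utterance vectors where utterance i may only
    attend to utterance j when mask[i, j] == 1 (e.g. i replies to j,
    j replies to i, or i == j)."""
    d = H.shape[1]
    scores = H @ H.T / np.sqrt(d)
    scores = np.where(mask > 0, scores, -1e9)   # block unrelated pairs
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)        # row-wise softmax
    return w @ H

# toy thread: u1 and u2 both reply to u0; u1 and u2 are unrelated
mask = np.array([[1, 1, 1],
                 [1, 1, 0],
                 [1, 0, 1]], dtype=float)
rng = np.random.default_rng(0)
H = rng.standard_normal((3, 16))     # utterance vectors (e.g. pooled tokens)
G = reply_masked_attention(H, mask)  # reply-aware global features
```

Restricting attention to the reply graph (and, analogously, to same-speaker pairs) is a simple way to inject dialogue structure into otherwise order-free attention.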

中图分类号: TP391

[1]PAPAGEORGIOU H,ANDROUTSOPOULOS I,GALANIS D,et al.SemEval-2015 task 12:aspect based sentiment analysis[C]//Proceedings of the 9th International Workshop on Semantic Evaluation.Stroudsburg,PA:Association for Computational Linguistics,2015:486-495.
[2]KIPF T N,WELLING M.Semi-supervised classification with graph convolutional networks[J].arXiv:1609.02907,2016.
[3]LI B,FEI H,LI F,et al.DiaASQ:A benchmark of conversational aspect-based sentiment quadruple analysis[J].arXiv:2211.05705,2022.
[4]WU Z,YING C,ZHAO F,et al.Grid tagging scheme for aspect-oriented fine-grained opinion extraction[J].arXiv:2010.04640,2020.
[5]SU J,AHMED M,LU Y,et al.RoFormer:Enhanced transformer with rotary position embedding[J].Neurocomputing,2024,568:127063.
[6]CAI C,ZHAO Q,XU R,et al.Improving Conversational Aspect-Based Sentiment Quadruple Analysis with Overall Modeling[C]//CCF International Conference on Natural Language Processing and Chinese Computing.Cham:Springer,2023:149-161.
[7]DEVLIN J,CHANG M W,LEE K,et al.BERT:Pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.2019:4171-4186.
[8]LAI Y,FAN S,TONG Z,et al.Conversational aspect-based sentiment quadruple analysis with consecutive multi-view interaction[C]//CCF International Conference on Natural Language Processing and Chinese Computing.Cham:Springer,2023:162-173.
[9]ANGUITA D,GHELARDONI L,GHIO A,et al.The ‘K’ in K-fold Cross Validation[C]//ESANN.2012:441-446.
[10]ZHAO Z,ZHANG L,ZHENG Q,et al.Multi-dimensional feature interaction for Conversational Aspect-Based Quadruple Sentiment Analysis[J].Neural Processing Letters,2025,57(1):9.
[11]HE K,ZHANG X,REN S,et al.Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2016:770-778.
[12]LI B,FEI H,LIAO L,et al.Harnessing holistic discourse features and triadic interaction for sentiment quadruple extraction in dialogues[C]//Proceedings of the AAAI Conference on Artificial Intelligence.2024:18462-18470.
[13]WANG X,JI H,SHI C,et al.Heterogeneous graph attention network[C]//The World Wide Web Conference.2019:2022-2032.
[14]SUN K,ZHANG R,MENSAH S,et al.Aspect-level sentiment analysis via convolution over dependency tree[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing.Stroudsburg,PA:Association for Computational Linguistics,2019:5679-5688.
[15]LIU H,XU C,LIANG J.Dependency distance:A new perspective on syntactic patterns in natural languages[J].Physics of Life Reviews,2017,21:171-193.
[16]VASWANI A,SHAZEER N,PARMAR N,et al.Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems.2017:6000-6010.
[17]CHANG X W,DUAN L G,CHEN J H,et al.A fragment-level extraction method for sentiment triples based on deep fusion of syntactic and semantic features[J].Computer Science,2026,53(2):322-330.
[18]BA J L,KIROS J R,HINTON G E.Layer normalization[J].arXiv:1607.06450,2016.
[19]MANNING C D,SURDEANU M,BAUER J,et al.The Stanford CoreNLP natural language processing toolkit[C]//Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics:System Demonstrations.2014:55-60.
[20]CHEN M,BEUTEL A,COVINGTON P,et al.Top-k off-policy correction for a REINFORCE recommender system[C]//Proceedings of the twelfth ACM International Conference on Web Search and Data Mining.2019:456-464.
[21]BEBIS G,GEORGIOPOULOS M.Feed-forward neural networks[J].IEEE Potentials,1994,13(4):27-31.
[22]TOLSTIKHIN I O,HOULSBY N,KOLESNIKOV A,et al.MLP-Mixer:An all-MLP architecture for vision[J].Advances in Neural Information Processing Systems,2021,34:24261-24272.
[23]LIU Y,OTT M,GOYAL N,et al.RoBERTa:A robustly optimized BERT pretraining approach[J].arXiv:1907.11692,2019.
[24]CUI Y,CHE W,LIU T,et al.Pre-training with whole word masking for Chinese BERT[J].IEEE/ACM Transactions on Audio,Speech,and Language Processing,2021,29:3504-3514.
[25]CAI H,XIA R,YU J.Aspect-category-opinion-sentiment quadruple extraction with implicit aspects and opinions[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.2021:340-350.
[26]EBERTS M,ULGES A.Span-based joint entity and relation extraction with transformer pre-training[C]//Proceedings of the 24th European Conference on Artificial Intelligence.Amsterdam:IOS Press,2020:2006-2013.
[27]ZHANG W,DENG Y,LI X,et al.Aspect sentiment quad prediction as paraphrase generation[J].arXiv:2110.00796,2021.
[28]XU L,CHIA Y K,BING L.Learning span-level interactions for aspect sentiment triplet extraction[J].arXiv:2107.12214,2021.
[29]JIANG H,CHEN X,MIAO D,et al.IFusionQuad:A novel framework for improved aspect-based sentiment quadruple analysis in dialogue contexts with advanced feature integration and contextual CloBlock[J].Expert Systems with Applications,2025,261:125556.
[30]LI Y,ZHANG W,LI B,et al.Dynamic multi-scale context aggregation for conversational aspect-based sentiment quadruple analysis[C]//ICASSP 2024-2024 IEEE International Conference on Acoustics,Speech and Signal Processing(ICASSP).IEEE,2024:11241-11245.