Computer Science, 2024, Vol. 51, Issue (11A): 231000084-7. doi: 10.11896/jsjkx.231000084

• Intelligent Computing •

Clinical Findings Recognition and Yin & Yang Status Inference Based on Doctor-Patient Dialogue

LIN Haonan1, TAN Hongye1,2, FENG Huimin1

  1 School of Computer and Information Technology, Shanxi University, Taiyuan 030006, China
  2 Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, Shanxi University, Taiyuan 030006, China
  • Online: 2024-11-16  Published: 2024-11-13
  • Corresponding author: TAN Hongye (hytan_2006@126.com)
  • About the authors: LIN Haonan (202122407030@email.sxu.edu.cn), born in 1998, M.S., is a member of CCF (No. A1832G). His main research interests include natural language processing and smart healthcare.
    TAN Hongye, born in 1971, Ph.D., professor, is a member of CCF (No. E200022704M). Her main research interests include natural language processing, smart healthcare, and smart education.
  • Supported by: General Program of the National Natural Science Foundation of China (62076155).

Abstract: Clinical findings recognition and Yin & Yang status inference is an important task in the field of intelligent healthcare. Its goal is to identify clinical findings, such as diseases and symptoms, in medical texts such as doctor-patient dialogue records, and then to determine their Yin & Yang (negative/positive) status. Existing research has two main weaknesses: (1) it does not model features such as the semantic information and dialogue structure of doctor-patient dialogues, which limits model accuracy; (2) it splits the task into separate recognition and inference stages, which causes error accumulation. To address these weaknesses, this paper proposes a unified generative model that incorporates dialogue information. It constructs a static-dynamic fusion graph to model the semantics and structure of doctor-patient dialogues, enhancing the model's dialogue understanding; it uses a generative language model to unify clinical findings recognition and Yin & Yang status inference into a single sequence generation task, mitigating the error accumulation problem; and it additionally identifies Yin & Yang indicator words to help the model improve status-inference accuracy. Experimental results on the CHIP2021 evaluation dataset CHIP-MDCFNPC show that the proposed method achieves an F1 score of 71.83%, an average improvement of 2.82% over the baseline models.

Key words: Doctor-patient dialogue, Clinical findings recognition, Yin & Yang status inference, Dialogue modeling, Unified generative model
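The unification described in the abstract casts both subtasks as producing one output sequence that interleaves each recognized finding with its status, so a single decoder handles recognition and inference and no intermediate errors propagate between stages. The paper does not give its exact target-sequence format, so the linearization below is a hypothetical sketch; the bracketed status labels and the `; ` separator are assumptions for illustration only.

```python
# Hypothetical sketch: casting clinical-finding recognition plus
# Yin & Yang (negative/positive) status inference as one sequence-
# generation task. The target format here is an assumption, not
# the paper's actual serialization.

def linearize(findings):
    """Serialize (finding, status) pairs into one target sequence
    for a generative model to learn to emit."""
    return "; ".join(f"{name} [{status}]" for name, status in findings)

def parse(sequence):
    """Recover (finding, status) pairs from a generated sequence,
    skipping fragments that do not match the expected format."""
    pairs = []
    for item in sequence.split("; "):
        if item.endswith("]") and " [" in item:
            name, status = item[:-1].rsplit(" [", 1)
            pairs.append((name, status))
    return pairs

pairs = [("cough", "yang"), ("fever", "yin")]
target = linearize(pairs)   # "cough [yang]; fever [yin]"
assert parse(target) == pairs
```

Because the model emits findings and statuses jointly, a wrong span boundary no longer forces a wrong status label in a separate downstream classifier, which is the error-accumulation problem the two-stage pipelines suffer from.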

CLC number: TP391