Computer Science ›› 2024, Vol. 51 ›› Issue (11A): 231000084-7. doi: 10.11896/jsjkx.231000084

• Intelligent Computing •

Clinical Findings Recognition and Yin & Yang Status Inference Based on Doctor-Patient Dialogue

LIN Haonan1, TAN Hongye1,2, FENG Huimin1   

  1. School of Computer and Information Technology, Shanxi University, Taiyuan 030006, China
    2. Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, Shanxi University, Taiyuan 030006, China
  • Online: 2024-11-16  Published: 2024-11-13
  • About author: LIN Haonan, born in 1998, M.S., is a member of CCF (No.A1832G). His main research interests include natural language processing, smart healthcare, etc.
    TAN Hongye, born in 1971, Ph.D., professor, is a member of CCF (No.E200022704M). Her main research interests include natural language processing, smart healthcare, smart education, etc.
  • Supported by:
    General Program of the National Natural Science Foundation of China (62076155).

Abstract: Clinical findings recognition and Yin & Yang status inference are important tasks in the field of intelligent healthcare. The goal is to identify clinical findings such as diseases and symptoms from doctor-patient dialogue records and then determine their Yin & Yang status. Existing research has two main weaknesses: (1) it lacks modeling of the semantic information and dialogue structure in doctor-patient dialogues, leading to low model accuracy; (2) it is implemented as a two-stage process, which causes error accumulation. This paper proposes a unified generative method that incorporates dialogue information. The method constructs a static-dynamic fusion graph to model the semantic and structural information in doctor-patient dialogues, enhancing the model's understanding of the conversation, and uses a generative language model to unify clinical findings recognition and Yin & Yang status inference into a single sequence generation task, mitigating the problem of error accumulation. In addition, it improves the accuracy of Yin & Yang status inference by identifying Yin & Yang status indicator words. Experimental results on the CHIP2021 evaluation dataset CHIP-MDCFNPC show that the proposed method achieves an F1 score of 71.83%, which is 2.82% higher on average than the baseline models.
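
To make the unified generative formulation described in the abstract more concrete, the sketch below shows one way clinical findings recognition and Yin & Yang status inference could be cast as a single sequence-generation problem with a pretrained encoder-decoder. The model choice (an mT5 checkpoint loaded via Hugging Face Transformers), the speaker-tagged source format, and the "finding:status" target linearization are illustrative assumptions, not the paper's actual scheme, which additionally relies on a static-dynamic fusion graph and status indicator words.

```python
# A minimal sketch (not the paper's implementation): unifying clinical findings
# recognition and Yin & Yang status inference as one sequence-generation task,
# so that no separate status-classification stage is needed.
# Model name, source/target formats, and status labels are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/mt5-small"  # any Chinese-capable seq2seq model could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def build_source(dialogue_turns):
    """Concatenate speaker-tagged turns into one source string,
    e.g. [('医生', '咳嗽吗?'), ('患者', '咳嗽, 不发烧')] -> '医生:咳嗽吗? 患者:咳嗽, 不发烧'."""
    return " ".join(f"{speaker}:{utterance}" for speaker, utterance in dialogue_turns)

def linearize_target(findings):
    """Turn (finding, status) pairs into one target string,
    e.g. [('咳嗽', '阳性'), ('发热', '阴性')] -> '咳嗽:阳性; 发热:阴性'."""
    return "; ".join(f"{mention}:{status}" for mention, status in findings)

def extract(dialogue_turns):
    """Decode findings and their statuses in a single generation pass."""
    inputs = tokenizer(build_source(dialogue_turns), truncation=True,
                       max_length=512, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=128)
    decoded = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    return [tuple(pair.split(":", 1)) for pair in decoded.split("; ") if ":" in pair]
```

During fine-tuning, build_source and linearize_target would supply the input and target strings for each annotated dialogue; an untuned checkpoint will of course not emit meaningful finding/status pairs.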

Key words: Doctor-patient dialogue, Clinical findings recognition, Yin & Yang status inference, Dialogue modeling, Unified generative model

CLC Number: TP391