Computer Science ›› 2024, Vol. 51 ›› Issue (6A): 230500203-7. doi: 10.11896/jsjkx.230500203

• Artificial Intelligence •

Chinese Medical Named Entity Recognition with Label Knowledge

YIN Baosheng, ZHOU Peng   

  1. Human-Machine Intelligence Research Center, Shenyang Aerospace University, Shenyang 110136, China
  • Published: 2024-06-06
  • Corresponding author: ZHOU Peng (1132187866@qq.com)
  • About author: YIN Baosheng (541951941@qq.com), born in 1975, professor. His main research interests include deep learning and natural language processing.
    ZHOU Peng, born in 1999, postgraduate. His main research interests include natural language processing and named entity recognition.
  • Supported by:
    Project of the Education Department of Liaoning Province, China (LJKMZ20220536).

Abstract: Named entity recognition in the medical domain is one of the important research topics in information extraction. Its training data mainly comes from unstructured texts such as clinical trial data, health records and electronic medical records, yet annotating such data requires professionals to spend large amounts of manpower, material resources and time. In the absence of large-scale medical training data, medical named entity recognition models are prone to recognition errors. To address this problem, this paper proposes a Chinese medical named entity recognition method that incorporates label knowledge: after obtaining the definitions of the text labels from a professional domain dictionary, the text, the labels and the label definitions are encoded separately and then fused through an adaptive fusion mechanism, which effectively balances the information flow between the feature extraction module and the semantic enhancement module and thereby improves model performance. The core idea is that medical entity labels are obtained by summarizing and generalizing large amounts of medical data, and label definitions are the result of scientifically explaining those labels; by incorporating this rich prior knowledge of the medical domain, the model can understand the semantics of entities in the medical domain more accurately and improve its recognition performance. Experimental results show that the method achieves improvements of 0.71%, 0.53% and 1.17% over three baseline models on the Chinese medical entity extraction dataset (CMeEE-V2), and it provides an effective solution for entity recognition in few-shot scenarios.
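For readers who want a concrete picture of the pipeline described in the abstract (encode the sentence, the label and its dictionary definition separately, then fuse them adaptively before tagging), the following is a minimal, hypothetical PyTorch sketch rather than the authors' implementation. It assumes Hugging Face transformers BERT encoders, one label-plus-definition string per forward pass, and a simple learned gate standing in for the adaptive fusion mechanism; the names AdaptiveFusion and LabelKnowledgeNER are invented for illustration.

# Minimal sketch of label-knowledge fusion for Chinese medical NER.
# Assumptions (not from the paper's released code): BERT encoders from the
# Hugging Face `transformers` library, a single gated fusion layer, and a
# plain token-level classification head.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class AdaptiveFusion(nn.Module):
    """Learned gate that balances token features (feature extraction module)
    against label-definition features (semantic enhancement module)."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, token_feats, label_feats):
        # token_feats: [batch, seq_len, hidden]; label_feats: [batch, hidden]
        expanded = label_feats.unsqueeze(1).expand_as(token_feats)
        g = torch.sigmoid(self.gate(torch.cat([token_feats, expanded], dim=-1)))
        return g * token_feats + (1.0 - g) * expanded  # adaptive mix of the two streams

class LabelKnowledgeNER(nn.Module):
    def __init__(self, num_tags: int, model_name: str = "bert-base-chinese"):
        super().__init__()
        self.text_encoder = BertModel.from_pretrained(model_name)
        self.label_encoder = BertModel.from_pretrained(model_name)
        hidden = self.text_encoder.config.hidden_size
        self.fusion = AdaptiveFusion(hidden)
        self.classifier = nn.Linear(hidden, num_tags)

    def forward(self, text_inputs, label_inputs):
        # Encode the sentence and the "label + definition" string separately.
        token_feats = self.text_encoder(**text_inputs).last_hidden_state
        label_feats = self.label_encoder(**label_inputs).last_hidden_state[:, 0]  # [CLS]
        fused = self.fusion(token_feats, label_feats)
        return self.classifier(fused)  # per-token tag logits

if __name__ == "__main__":
    tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
    text = tokenizer(["患者出现持续性咳嗽和发热症状。"], return_tensors="pt", padding=True)
    # Label name plus its dictionary definition, e.g. for a "symptom" entity type.
    label = tokenizer(["症状:患者主观感受到的异常感觉或病态改变。"], return_tensors="pt", padding=True)
    model = LabelKnowledgeNER(num_tags=9)  # tag count is arbitrary for this sketch
    logits = model(text, label)
    print(logits.shape)  # torch.Size([1, seq_len, 9])

In the actual model a CRF or span-based decoder and a per-label query scheme would likely replace the plain token classifier; the gate above only illustrates how the information flow between the feature extraction module and the semantic enhancement module can be balanced.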

Key words: Chinese medical named entity recognition, Label knowledge, Prior knowledge, Adaptive fusion mechanism, Few-shot

CLC number: TP391