Computer Science ›› 2009, Vol. 36 ›› Issue (10): 222-224.

• Artificial Intelligence •

A Quantitative Measure of Training Sample Size for Chinese Statistical Language Models

ZHANG Yang-sen

  1. (Institute of Intelligent Information Processing, Beijing Information Science and Technology University, Beijing 100192, China)
  • Online: 2018-11-16 Published: 2018-11-16
  • Supported by:
    This work was supported by the National Natural Science Foundation of China (60873013), the Beijing Natural Science Foundation Class B Key Project (KZ200811232019), the Open Fund of the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, and the Funding Project for Academic Human Resources Development in Beijing Municipal Universities.

Abstract: Parameter training is the key step in building a statistical language model, and determining how large a training sample must be to satisfy a given parameter-estimation error requirement is one of the central questions of language modeling theory. Applying mathematical statistics, this paper gives a quantitative description of the training corpus sample size for Chinese statistical language models, and derives an estimation method and a quantitative formula for the lower bound on the training sample size of a Chinese n-gram model. With this formula, the corpus size needed to train the model parameters can be computed from the required parameter-estimation error.

Key words: Chinese statistical language model, training corpus sample, sample size, relative error
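The paper's own lower-bound formula is not reproduced on this page, but the idea can be illustrated with a standard textbook bound: under a normal approximation to the binomial, estimating an n-gram probability p with relative error at most ε at a given confidence level requires roughly N ≥ z²(1 − p)/(ε²p) samples. The sketch below is this generic bound, not the paper's exact formula; the function name and parameters are illustrative.

```python
from math import ceil
from statistics import NormalDist

def min_sample_size(p: float, rel_err: float, confidence: float = 0.95) -> int:
    """Lower bound on the sample size N so that the relative error of the
    estimated n-gram probability stays within rel_err at the given confidence
    level, using the normal approximation to the binomial distribution.

    This is a generic textbook bound, not the formula from the paper itself.
    """
    # Two-sided critical value z for the chosen confidence level
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    # Require z * sqrt((1 - p) / (p * N)) <= rel_err, then solve for N
    return ceil(z * z * (1 - p) / (rel_err * rel_err * p))

# Example: a rare bigram with probability 1e-4, 10% relative error, 95% confidence
n = min_sample_size(p=1e-4, rel_err=0.10)
```

As expected, the bound grows rapidly as the event probability p shrinks, which is why reliable estimation of rare n-grams dominates the corpus-size requirement.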
