Computer Science ›› 2016, Vol. 43 ›› Issue (9): 71-76. doi: 10.11896/j.issn.1002-137X.2016.09.013

• The Third CCF Conference on Big Data (2015) •

Adaptive Parameters Updating Strategy of Context-aware Factorization Machines

YAO Xing, ZHU Fu-xi, YANG Xiao-lan, ZHENG Lin and LIU Shi-chao

  1. School of Computer Science, Wuhan University, Wuhan 430072, China (YAO Xing, ZHU Fu-xi, ZHENG Lin, LIU Shi-chao); School of Information Engineering, Wuchang University of Technology, Wuhan 430223, China (YANG Xiao-lan)
  • Online: 2018-12-01  Published: 2018-12-01
  • Supported by: National Natural Science Foundation of China (61272277) and Natural Science Foundation of Hubei Province (2014CFB356)

Abstract: Factorization machines have been successfully applied to context-aware recommender systems. Among the learning algorithms for factorization machines, alternating least squares fixes all other parameters and solves for the optimal value of one parameter at a time, so the number of parameters and the sample size determine its computational complexity. When the number of features is large, the number of parameters grows sharply with it, making each iteration expensive; moreover, every parameter is updated in every iteration, even those that have already reached their optimal values. This paper improves the parameter updating strategy of alternating least squares by introducing an adaptive error index for each parameter: whether a parameter is updated is decided jointly by its weight and its absolute error, so that each iteration concentrates on the parameters whose values changed most over the last two iterations. Updating only the parameters whose adaptive error exceeds a threshold reduces the number of updates per iteration, which accelerates convergence and shortens the running time; and because the parameter weights are determined by the error, the error itself is corrected as well. Experiments on the Yahoo and Movielens data sets show that the improved parameter updating strategy clearly improves running efficiency.

Key words: Factorization machines, Alternating least squares, Recommender systems, Adaptive error
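
As background, the model the abstract refers to is the second-order factorization machine (Rendle, 2010). In notation common to the FM literature (ours, not necessarily the paper's), the prediction and the closed-form single-parameter optimum that alternating least squares exploits are:

\hat{y}(\mathbf{x}) = w_0 + \sum_{j=1}^{p} w_j x_j + \sum_{j=1}^{p} \sum_{j'=j+1}^{p} \langle \mathbf{v}_j, \mathbf{v}_{j'} \rangle\, x_j x_{j'}

\theta^{*} = \frac{\sum_{i=1}^{n} h_\theta(\mathbf{x}_i)\,\bigl(y_i - g_\theta(\mathbf{x}_i)\bigr)}{\sum_{i=1}^{n} h_\theta(\mathbf{x}_i)^2 + \lambda_\theta},
\qquad \text{where } \hat{y}(\mathbf{x}) = g_\theta(\mathbf{x}) + \theta\, h_\theta(\mathbf{x})

Since \hat{y} is linear in each individual parameter \theta (the bias w_0, each w_j, and each factor v_{j,f}), every parameter has such a closed-form optimum when the others are held fixed, and each ALS sweep computes it for every parameter in turn.

The Python sketch below illustrates the adaptive updating idea described in the abstract. The abstract does not give the exact weight and error formulas, so the concrete choices here are illustrative assumptions only: the adaptive error of a parameter is taken as a weighted combination (with an assumed weight alpha) of its last two absolute changes, the threshold tau is fixed, and all function names are hypothetical.

import numpy as np

def fm_predict(X, w0, w, V):
    """Second-order FM prediction:
    w0 + X w + 1/2 * sum_f [ (X V)_{:,f}^2 - (X^2)(V^2)_{:,f} ]."""
    XV = X @ V
    return w0 + X @ w + 0.5 * np.sum(XV ** 2 - (X ** 2) @ (V ** 2), axis=1)

def train_fm_als_adaptive(X, y, k=4, n_iter=30, reg=0.1, tau=1e-4,
                          alpha=0.7, seed=0):
    """ALS for an FM in which a parameter is re-solved only while its
    adaptive error (weighted last-two absolute changes; an assumed
    formula, not the authors' exact one) exceeds the threshold tau."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w0, w = 0.0, np.zeros(p)
    V = 0.01 * rng.standard_normal((p, k))
    # Last two absolute changes per parameter; inf forces every
    # parameter to be updated during the first two full sweeps.
    d_w = np.full((2, p), np.inf)
    d_V = np.full((2, p, k), np.inf)

    for _ in range(n_iter):
        e = fm_predict(X, w0, w, V) - y          # current residuals

        # Global bias w0 (h(x) = 1): coordinate-wise least squares.
        new_w0 = (n * w0 - e.sum()) / (n + reg)
        e += new_w0 - w0
        w0 = new_w0

        # Linear weights w_j (h(x) = x_j).
        for j in range(p):
            if alpha * d_w[0, j] + (1 - alpha) * d_w[1, j] < tau:
                continue                          # adaptive error too small: skip
            h = X[:, j]
            new = (h @ (h * w[j] - e)) / (h @ h + reg)
            d_w[1, j], d_w[0, j] = d_w[0, j], abs(new - w[j])
            e += (new - w[j]) * h
            w[j] = new

        # Pairwise factors v_{j,f} (h(x) = x_j * (q_f - v_{j,f} x_j)).
        Q = X @ V                                 # per-sample factor sums q_f
        for f in range(k):
            for j in range(p):
                if alpha * d_V[0, j, f] + (1 - alpha) * d_V[1, j, f] < tau:
                    continue
                h = X[:, j] * (Q[:, f] - V[j, f] * X[:, j])
                new = (h @ (h * V[j, f] - e)) / (h @ h + reg)
                d_V[1, j, f], d_V[0, j, f] = d_V[0, j, f], abs(new - V[j, f])
                e += (new - V[j, f]) * h
                Q[:, f] += (new - V[j, f]) * X[:, j]
                V[j, f] = new
    return w0, w, V

if __name__ == "__main__":
    # Quick check on synthetic data: training MSE should be small.
    rng = np.random.default_rng(1)
    X = rng.random((200, 10))
    y = 2.0 + X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(200)
    w0, w, V = train_fm_als_adaptive(X, y)
    print(np.mean((fm_predict(X, w0, w, V) - y) ** 2))

In the first two sweeps every parameter is updated (the inf initialisation guarantees this, matching "the last two iterations" in the abstract); afterwards, parameters that have stabilised are skipped, which is where the reduction in per-iteration work, and hence the shorter running time, comes from.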

