Computer Science, 2015, Vol. 42, Issue (1): 54-58. DOI: 10.11896/j.issn.1002-137X.2015.01.012


Design and Optimization of Storage System in HEP Computing Environment

CHENG Yao-dong, WANG Lu, HUANG Qiu-lan and CHEN Gang   

Online: 2018-11-14    Published: 2018-11-14

Abstract: High energy physics computing is a typical data-intensive application, and data access throughput is critical to the performance of the computing system. Data access performance is closely related to the computing model of the application. This paper first analyzed the typical high energy physics computing models and then summarized the characteristics of their data access. Based on these characteristics, several optimization measures were proposed, including the operating system I/O scheduling policy and the distributed file system cache configuration. Data access performance and CPU utilization are improved significantly after optimization. Metadata management, data reliability, scalability and other manageability functions are also important for a large-scale storage system. Considering some shortcomings of the existing Lustre parallel file system, this paper finally proposed the Gluster storage system as a new solution for high energy physics. After tuning of some key factors, such as data management and scalability, the system has been put into use. It is demonstrated that its data access performance, together with better scalability and reliability, can meet the needs of high energy physics computing.
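The I/O-scheduler tuning mentioned among the optimization measures can be applied through the standard Linux sysfs block-queue interface. The following is a minimal sketch, not the script used in the paper; the device name ("sda") and the target scheduler ("deadline") are illustrative assumptions and should be adapted to the actual storage nodes.

```python
# Minimal sketch: switch the Linux I/O scheduler for a block device via sysfs.
# Device and scheduler names are illustrative; requires root privileges.
import sys

def set_io_scheduler(device: str, scheduler: str) -> None:
    path = f"/sys/block/{device}/queue/scheduler"
    with open(path) as f:
        available = f.read()          # e.g. "noop [cfq] deadline"
    offered = available.replace("[", "").replace("]", "").split()
    if scheduler not in offered:
        raise ValueError(f"{scheduler!r} not offered by this kernel: {available.strip()}")
    with open(path, "w") as f:
        f.write(scheduler)            # takes effect immediately, not persistent

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "sda"
    set_io_scheduler(dev, "deadline")
    print(f"I/O scheduler for {dev} set to deadline")
```

A change made this way does not survive a reboot; in a production cluster it would normally be made persistent through a udev rule or a kernel boot parameter.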

Key words: HPC, Mass storage system, Lustre, Gluster

