Computer Science ›› 2021, Vol. 48 ›› Issue (2): 1-12.doi: 10.11896/jsjkx.201000149

• New Distributed Computing Technologies and Systems •

Review on Performance Optimization of Ceph Distributed Storage System

ZHANG Xiao1,2,3, ZHANG Si-meng1,2, SHI Jia1,2, DONG Cong1,2, LI Zhan-huai1,2,3   

  1 School of Computer Science,Northwestern Polytechnical University,Xi'an 710129,China
    2 MIIT Key Laboratory of Big Data Storage and Management,Northwestern Polytechnical University,Xi'an 710129,China
    3 National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology,Northwestern Polytechnical University,Xi'an 710129,China
  • Received:2020-10-16 Revised:2020-11-26 Online:2021-02-15 Published:2021-02-04
  • About author:ZHANG Xiao,born in 1978,Ph.D,is a member of China Computer Federation.His main research interests include storage systems,computer networks and distributed file systems.
  • Supported by:
    The National Key Research and Development Program(2018YFB1004401) and Beijing Natural Science Foundation (L192027).

Abstract: Ceph is a unified distributed storage system that provides three types of storage interfaces: block, file, and object. Unlike traditional distributed storage systems, it manages metadata without a central node, which gives it good scalability and linearly growing performance. After more than ten years of development, Ceph has been widely used in cloud computing and big data storage systems. As the underlying platform of cloud computing, Ceph not only provides storage for virtual machines but also directly offers object storage and NAS file services, supporting the storage requirements of the various operating systems and applications in cloud computing systems. Its performance therefore has a great influence on the virtual machines and applications running on it, and performance optimization of the Ceph storage system has long been a research hotspot in academia and industry. This paper first introduces the architecture and characteristics of Ceph, then summarizes existing performance optimization techniques from three aspects: improvement of internal mechanisms, optimization for new hardware, and application-oriented optimization, and reviews recent research on Ceph storage and optimization. Finally, it discusses future work, hoping to provide a valuable reference for researchers working on the performance optimization of distributed storage systems.
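The decentralized metadata management the abstract refers to means that any client can compute an object's location by itself instead of asking a central metadata server. The following toy sketch illustrates that idea with plain hashing; it is hypothetical illustration code, not Ceph's actual CRUSH algorithm (real CRUSH additionally handles bucket types, device weights, and failure domains), and the function `place_object` and the OSD names are invented for this example.

```python
import hashlib

def place_object(object_id: str, osds: list, replicas: int = 3) -> list:
    """Pick `replicas` distinct OSDs for an object by deterministic hashing.

    Because the mapping depends only on the object name and the OSD list,
    every client computes the same placement independently, so no central
    metadata node needs to be consulted on the I/O path.
    """
    chosen = []
    attempt = 0
    # Re-hash with an attempt counter until enough distinct OSDs are found.
    while len(chosen) < replicas and attempt < 10 * replicas:
        digest = hashlib.sha256(f"{object_id}:{attempt}".encode()).digest()
        osd = osds[int.from_bytes(digest[:4], "big") % len(osds)]
        if osd not in chosen:  # replicas must land on distinct devices
            chosen.append(osd)
        attempt += 1
    return chosen

# Two independent clients compute the identical mapping:
osds = [f"osd.{i}" for i in range(8)]
print(place_object("rbd_data.1234", osds))
```

Determinism is what removes the central node from the read/write path; the price, as several of the surveyed works note, is that changing the device list remaps data, which is why controlled-migration schemes such as MAPX [23] exist.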

Key words: Ceph distributed storage system, Non-volatile memory, Performance optimization, Solid state disk, Unified storage

CLC Number: 

  • TP319
[1] WEIL S,BRANDT S,MILLER E,et al.CRUSH:Controlled,scalable,decentralized placement of replicated data[C]//Proceedings of the 2006 ACM/IEEE Conference on Supercomputing.SC,2006:122.
[2] WEIL S,BRANDT S,MILLER E,et al.Ceph:A scalable,high-performance distributed file system[C]//7th USENIX Symposium on Operating Systems Design and Implementation(OSDI).2006:307-320.
[3] OPENSTACK ORG.2015:Openstack user survey [EB/OL].
[4] INTEL.Ceph Benchmark Tools [EB/OL].
[5] CEPH COMMUNITY.Teuthology[EB/OL].
[6] WAN H T,LI Z H,ZHANG X.A Layered Performance Monitoring and Gathering Method of Cloud Storage[J].Journal of Northwestern Polytechnical University,2016,34(3):529-535.
[7] ZHANG X,KONG L,ZHU S,et al.FSObserver:A Performance Measurement and Monitoring Tool for Distributed Storage Systems[C]//IFIP International Conference on Network and Parallel Computing.Springer,Cham,2018:142-147.
[8] ZHANG X,WANG Y Q,WANG Q,et al.A New Approach to Double I/O Performance for Ceph Distributed File System in Cloud Computing[C]//2019 2nd International Conference on Data Intelligence and Security (ICDIS).IEEE,2019:68-75.
[9] LEE D,JEONG K,HAN S,et al.Understanding Write Behaviors of Storage Backends in Ceph Object Store[C]//IEEE Conference on Mass Storage Systems and Technologies.IEEE,2017:10.
[10] WEIL S.Bluestore:A New Storage Backend For Ceph[EB/OL].
[11] AGHAYEV A,WEIL S,KUCHNIK M,et al.File systems unfit as distributed storage backends:lessons from 10 years of Ceph evolution[C]//ACM SIGOPS 27th Symposium on Operating Systems Principles.ACM,2019:353-369.
[13] CEPH COMMUNITY.Tuning for All Flash Deployments [EB/OL].
[14] SATHIAMOORTHY M,ASTERIS M,PAPAILIOPOULOS D,et al.XORing Elephants:Novel Erasure Codes for Big Data[C]//39th International Conference on Very Large Data Bases (VLDB).VLDB Endowment,2013:325-336.
[15] SUNGJOON K,ZHANG J,MIRYEONG K,et al.Understanding System Characteristics of Online Erasure Coding on Scalable,Distributed and Large-Scale SSD Array Systems[C]//2017 IEEE International Symposium on Workload Characterization (IISWC).IEEE,2017:76-86.
[16] ZHOU Y.Ceph Erasure Coding Introduction [EB/OL].
[17] HAN Y,PARK S,LEE K.A dynamic message-Aware communication scheduler for Ceph storage system[C]//Proceedings-IEEE 1st International Workshops on Foundations and Applications of Self-Systems.IEEE,2016:60-65.
[18] BODON J,AWAIS K,SUNGYONG P.Async-LCAM:a lock-contention aware messenger for Ceph distributed storage system[J].Cluster Computing,2018,22(2):1386-7857.
[19] SONG U,JEONG B,PARK S,et al.Performance Optimization of Communication Subsystem in Scale-Out Distributed Storage[C]//2017 IEEE 2nd International Workshops on Foundations and Applications of Self Systems (FASW).IEEE,2017:263-268.
[20] GITHUB.msg/async:ibverbs/rdma support [EB/OL].
[21] WANG Y,YE M,HE Q,et al.A New Node Selecting Approach in Ceph Storage System Based on Software Defined Network and Multi-attributes Decision-making Model[J].Chinese Journal of Computers,2019,42(2):95-110.
[22] SHA H M,LIANG Y,JIANG W,et al.Optimizing Data Placement of MapReduce on Ceph-Based Framework under Load-Balancing Constraint[C]//2016 IEEE 22nd International Conference on Parallel and Distributed Systems(ICPADS).IEEE,2016:585-592.
[23] WANG L,ZHANG Y M,XU J W,et al.MAPX:Controlled Data Migration in the Expansion of Decentralized Object-Based Storage Systems[C]//18th USENIX Conference on File and Storage Technologies.FAST 20,2020:1-12.
[24] OH M,EOM J,YOON J,et al.Performance Optimization for All Flash Scale-Out Storage[C]//IEEE International Conference on Cluster Computing.IEEE,2016:316-325.
[25] MEYER S,MORRISON J P.Impact of Single Parameter Changes on Ceph Cloud Storage Performance[J].Scalable Computing:Practice and Experience,2016,17(4):285-298.
[26] CAO Z,TARASOV V,TIWARI S.Towards better understanding of black-box auto-tuning:a comparative analysis for storage systems[C]//Proceedings of the 2018 Annual USENIX Technical Conference.Berkeley:USENIX Association,2018:893-907.
[27] CHEN Y,MAO Y C.Automatic tuning of Ceph parameters based on random forest and genetic algorithm[J].Journal of Computer Applications,2020,40(2):347-351.
[28] INTEL.CeTune tools[EB/OL].
[29] Flash Memory Summit 2018:Ceph Optimizations for NVMe[EB/OL].
[30] CEPH COMMUNITY.Bluestore Advanced Performance Investigation[EB/OL].
[31] LU Y,ZHANG J,YANG Z,et al.OCStore:Accelerating Distributed Object Storage with Open-Channel SSDs[C]// 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS).IEEE,2019:271-281.
[32] PYDIPATY R,GEORGE J,SAHA A,et al.The Effect of Non Volatile Memory on a Distributed Storage System[C]//IEEE International Conference on High Performance Computing Data and Analytics.IEEE,2017:11-17.
[33] JIN Z S.Optimization of Distributed Storage on Commodity SSD using NVDIMM[D].Seoul:Graduate School of Seoul University,2017.
[34] PETERSON S.Using persistent memory and RDMA for Ceph client write-back caching[C]//Storage Developer Conference.SNIA,2019:24-27.
[35] WEIL S.Erasure Coding And Cache Tiering[EB/OL].
[36] STEFAN M,JOHN P M.Supporting Heterogeneous Pools in a Single Ceph Storage Cluster[C]//International Symposium on Symbolic & Numeric Algorithms for Scientific Computing.IEEE,2016:352-359.
[37] WU L,ZHUGE Q,SHA H M,et al.BOSS:An Efficient Data Distribution Strategy for Object Storage Systems with Hybrid Devices[J].IEEE Access,2017,5(1):23979-23993.
[38] LÜTTGAU J,KUHN M,DUWE K,et al.Survey of storage systems for high performance computing[J].Supercomputing Frontiers and Innovations,2018,5(1):2313-8734.
[39] LIU J,KOZIOL Q,BUTLER G F,et al.Evaluation of HPC Application I/O on Object Storage Systems[C]//IEEE/ACM International Workshop on Parallel Data Storage & Data Intensive Scalable Computing Systems.IEEE,2018:24-34.
[40] PATEL T,BYNA S,LOCKWOOD G K,et al.Uncovering Access,Reuse,and Sharing Characteristics of I/O-Intensive Files on Large-Scale Production HPC Systems[C]//18th Conference on File and Storage Technologies.USENIX Association,2020:91-101.
[41] JEONG K,DUFFY C,KIM J,et al.Optimizing the Ceph Distributed File System for High Performance Computing[C]//2019 27th Euromicro International Conference on Parallel,Distributed and Network-Based Processing (PDP).IEEE,2019:446-451.
[42] ZHAN L,FANG X,LI D,et al.The research and implementation of metadata cache backup technology based on CEPH file system[C]//International Conference on Cloud Computing.IEEE,2016:72-77.
[43] WANG L,WEN Y C.Optimization on Small File Performance for CephFS Distributed File System[EB/OL].
[44] ZHAN K,XU L,YUAN Z,et al.Performance Optimization of Large Files Writes to Ceph Based on Multiple Pipelines Algorithm[C]//2018 IEEE Intl Conf on Parallel & Distributed Processing with Applications,Ubiquitous Computing & Communications,Big Data & Cloud Computing,Social Computing & Networking,Sustainable Computing & Communications(ISPA/IUCC/BDCloud/SocialCom/SustainCom).IEEE,2018:525-532.