Re: FW: RGW performance issue

If you are on >=hammer builds, you might want to consider using the 'rgw_num_rados_handles' option, which opens more handles to the cluster from RGW. This would help in scenarios where you have enough OSDs to drive the cluster bandwidth, which I guess is the case for you.
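
For illustration, a minimal sketch of the ceph.conf change on the RGW nodes (the client section name and the value 8 are assumptions; tune the number of handles to your cluster and restart radosgw afterwards):

    [client.radosgw.gateway]
    # open several RADOS handles instead of the default single one
    rgw_num_rados_handles = 8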


Thanks,

-Pavan.


From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of ?????? ????????
Sent: Thursday, November 12, 2015 1:51 PM
To: ceph-users@xxxxxxxxxxxxxx
Subject: FW: RGW performance issue


Hello,


We are building a cluster for archive storage. We plan to use Object Storage (RGW) only, no Block Devices or File System. We don't require high speed, so we are using old, weak servers (4 cores, 3 GB RAM) with new, huge but slow HDDs (8 TB, 5900 rpm). We currently have 3 storage nodes with 24 OSDs in total, and 3 RGW nodes based on the default Civetweb engine.


We get about 50 MB/s of “raw” write speed with librados-level benches (measured with rados bench and rados put), and that is quite enough for us. However, RGW performance is dramatically lower: no more than 5 MB/s for file uploads via s3cmd and the Swift client. That is too slow for our tasks, and it is abnormally slow compared with the librados write speed, imho.
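
For reference, the kind of commands used for these measurements (the pool, bucket, container and file names below are illustrative placeholders):

    # raw librados write speed against a test pool
    rados bench -p testpool 60 write -t 16
    rados -p testpool put testobj ./archive-sample.bin

    # upload speed through RGW, via the S3 and Swift clients
    s3cmd put ./archive-sample.bin s3://testbucket/
    swift upload testcontainer ./archive-sample.bin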


Write speed is the most important thing for us now; our first goal is to download about 50 TB of archive data from a public cloud to our on-premises storage. We need at least 20 MB/s of write speed.


Can anybody help me with RGW performance? For those who use RGW: what performance penalty does it introduce, and where should I look for the cause of the problem? I have checked all the performance counters I know of, and I haven't found any critical values.
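
In case it helps, the counters I am referring to can be dumped from the RGW admin socket (the socket path below is an assumption; it depends on the client name configured in ceph.conf):

    # dump all RGW performance counters as JSON
    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok perf dump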


Thanks.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
