RadosGW Performance

I'm evaluating various object stores/distributed file systems for use in our company, and I have some prior experience with Ceph. However, I'm running into a few issues when running benchmarks against RadosGW.


Basically my script is pretty dumb, but it captures one of our primary use cases reasonably accurately: it repeatedly copies files, each to a different key, either in S3 or into a hierarchical directory structure on a block device (e.g. 000/000/000/001/1.jpg), where the directory path forms the key. When writing to an S3-style object store, it uses the same scheme to generate the key for each file.
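For concreteness, the S3 path of the script boils down to roughly this (a simplified sketch in Python using boto; the endpoint, credentials, bucket name and source file below are placeholders, not our real values):

    import boto
    import boto.s3.connection

    def key_for(n):
        # Shard counter n into a 4-level path: 1 -> '000/000/000/001/1.jpg'
        m, levels = n, []
        for _ in range(4):
            levels.append('%03d' % (m % 1000))
            m //= 1000
        return '/'.join(reversed(levels)) + '/%d.jpg' % n

    conn = boto.connect_s3(
        aws_access_key_id='...',            # placeholder
        aws_secret_access_key='...',        # placeholder
        host='rgw.example.com',             # placeholder radosgw endpoint
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )
    bucket = conn.get_bucket('benchmark')   # placeholder bucket

    for n in range(1, 100001):
        k = bucket.new_key(key_for(n))
        k.set_contents_from_filename('1.jpg')   # same source file each time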

Now, when running this script against an RBD volume I quite happily get throughput in the high hundreds of MB/s, particularly if I run the process in parallel (forking it multiple times). However, if I bludgeon the script into using the S3 interface via radosgw, everything grinds to a halt (read: ~0.5 MB/s throughput per fork). This is a problem, and I don't believe the discrepancy is due to anything other than a misconfiguration.
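The parallelism is nothing cleverer than fork(); each fork writes a disjoint key range. Roughly (run_upload_loop is an illustrative name for the upload loop sketched above, and the counts are illustrative too):

    import os

    NUM_FORKS = 8         # illustrative
    PER_FORK = 10000      # objects per fork, illustrative

    for i in range(NUM_FORKS):
        if os.fork() == 0:
            # child: uploads keys [i*PER_FORK, (i+1)*PER_FORK)
            run_upload_loop(start=i * PER_FORK, count=PER_FORK)
            os._exit(0)

    for _ in range(NUM_FORKS):
        os.wait()  # parent reaps the children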

The test cluster runs 3 nodes with 86 drives/OSDs each (currently 6 TB drives); our use case requires high storage density. Hardware-wise, each node has 256 GB of RAM and two 12-core E5-2690 v3 CPUs @ 2.60 GHz, so there is more than enough CPU/RAM capacity.

Currently I have RadosGW running on one of the nodes, with Apache 2.4.7 acting as the proxy.
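The vhost is essentially the stock mod_proxy_fcgi setup from the Ceph docs, with radosgw listening as a FastCGI backend on localhost:9000 (the ServerName here is a placeholder):

    <VirtualHost *:80>
        # placeholder server name
        ServerName rgw.example.com
        DocumentRoot /var/www/html

        RewriteEngine On
        # pass the Authorization header through to the FastCGI backend
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]

        SetEnv proxy-nokeepalive 1
        ProxyPass / fcgi://localhost:9000/
    </VirtualHost>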

Any suggestions/pointers would be more than welcome, as Ceph is high on our list of favourites due to its feature set. It should definitely be performing faster than this.

Regards

Stuart Harland





