Bill,
I've run into a similar issue with objects averaging ~100KiB. The explanation I received on IRC is that there are scaling issues when you upload them all to the same bucket, because the bucket index isn't sharded. The recommended solution is to spread the objects out across a lot of buckets. However, that ran me into another issue once I hit 1000 buckets, which is the per-user limit. I removed the limit with this command:
radosgw-admin user modify --uid=your_username --max-buckets=0
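On the client side, the change amounts to picking a bucket per object instead of using one bucket for everything. Here's a rough sketch of one way to do that with the cloudfiles module; the credentials, auth endpoint, container prefix, and shard count below are just placeholders for illustration, not exact values:

import hashlib
import cloudfiles

# Placeholder credentials/endpoint -- point these at your radosgw Swift API.
conn = cloudfiles.get_connection('tenant:user', 'secret_key',
                                 authurl='http://radosgw.example.com/auth/v1.0')

NUM_SHARDS = 256  # number of buckets to spread the objects across

# create_container() returns the container if it already exists, so it's
# safe for every worker to run this at startup.
containers = [conn.create_container('smallfiles-%03d' % i)
              for i in range(NUM_SHARDS)]

def upload(key, data):
    # Hash the object name to pick a bucket, so the index load spreads out
    # evenly instead of piling up on a single unsharded index.
    shard = int(hashlib.md5(key).hexdigest(), 16) % NUM_SHARDS
    obj = containers[shard].create_object(key)
    obj.write(data)

More shards means smaller per-bucket indexes, at the cost of more containers to manage; pick a count that keeps each bucket's object count reasonable for your workload.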
Bryan
On Wed, Sep 4, 2013 at 11:27 AM, Bill Omer <bill.omer@xxxxxxxxx> wrote:
I'm testing ceph for storing a very large number of small files. I'm seeing some performance issues and would like to see if anyone could offer any insight as to what I could do to correct this.

Some numbers: I uploaded 184111 files, with an average file size of 5KB, using 10 separate servers to submit the requests with Python and the cloudfiles module. I stopped uploading after 53 minutes, which averages out to 5.7 files per second per node.

My storage cluster consists of 21 OSDs across 7 servers, with their journals written to SSD drives. I've done a default installation, using ceph-deploy with the dumpling release.

I'm using statsd to monitor the performance, and what's interesting is that when I start with an empty bucket, performance is amazing, with average response times of 20-50ms. However, as time goes on, the response times go into the hundreds and the average number of uploads per second drops.

I've installed radosgw on all 7 ceph servers. I've tested using a load balancer to distribute the API calls, as well as pointing the 10 worker servers at a single instance. I've not seen a real difference in performance either way.

Each of the ceph servers is a 16-core Xeon 2.53GHz with 72GB of RAM, OCZ Vertex4 SSD drives for the journals and Seagate Barracuda ES2 drives for storage.

Any help would be greatly appreciated.
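(For context, a minimal sketch of the kind of upload loop being described above, assuming the python-cloudfiles bindings pointed at a radosgw Swift endpoint; the credentials, endpoint, and container name are placeholders, not the actual test harness:)

import cloudfiles

# Placeholder credentials/endpoint -- each of the 10 worker servers would run
# something along these lines against the radosgw Swift API.
conn = cloudfiles.get_connection('tenant:user', 'secret_key',
                                 authurl='http://radosgw.example.com/auth/v1.0')
container = conn.create_container('smallfiles')  # single shared bucket

def upload(key, data):
    # Every object lands in the same bucket, so the bucket index grows
    # with every upload -- the likely cause of the slowdown described above.
    obj = container.create_object(key)
    obj.write(data)  # ~5KB payload per object in this test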
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Bryan Stillwell
SENIOR SYSTEM ADMINISTRATOR
E: bstillwell@xxxxxxxxxxxxxxx
O: 303.228.5109
M: 970.310.6085