On Mon, Dec 21, 2015 at 10:20 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
>>> > Oh, and to answer this part. I didn't do that much experimentation
>>> > unfortunately. I actually am using about 24 index shards per bucket
>>> > currently and we delete each bucket once it hits about a million
>>> > objects. (it's just a throwaway cache for us) Seems OK, so I stopped
>>> > tweaking.
>>> >
>>>
>>> I have a use case where I need to store 350 million objects in a single
>>> bucket.
>>
>> How many OSDs are in that cluster?
>>
>
> 1800, and it will grow towards 2500 in Q1 2016.
>
>>> I tested with 4096 shards and that works. Creating the bucket takes a
>>> few seconds though.
>>
>> Does "that works" mean that you have actually uploaded 350M objects into
>> that one bucket?
>>
>
> No, still in progress. The bucket functions, that is what I meant.

Got it. What's your OSD LevelDB size (the overall size of the OSD omap
directory)? Also, do you happen to have rest-bench results from when the
cluster was empty, and if so, what does rest-bench look like after you
inject, say, 100M objects?

>> If so, can you give me a feel for your typical object size?
>>
>
> It varies. It is an archiving solution and I'm not in control there.

Is there a "typical" size at least by order of magnitude? Kilobytes?
Tens or hundreds of KBs? MBs?

>> Also, what's the performance drop you saw in bucket listing, vs. having
>> fewer shards or no sharding at all?
>>
>
> There is a drop in listing performance. I didn't measure it precisely,
> but I think that with 4k shards, listing a bucket took a few seconds.

Yeah, that sounds about expected. This would hurt if, for some reason,
your use case involved having to list the bucket before inserting an
object.

> In this use case we are not going to list the bucket, ever.

Never say never. :)

Cheers,
Florian
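P.S. In case a quick way to answer the omap question helps: here is a rough
sketch that sums the on-disk size of each OSD's omap (LevelDB) directory.
It assumes FileStore OSDs with the default
/var/lib/ceph/osd/ceph-*/current/omap layout; adjust the glob if your OSDs
live somewhere else.

#!/usr/bin/env python
# Rough sketch: sum the on-disk size of every OSD omap (LevelDB) directory
# on this host. Assumes the default FileStore layout,
# /var/lib/ceph/osd/ceph-*/current/omap -- adjust the glob as needed.
import glob
import os

for omap_dir in sorted(glob.glob('/var/lib/ceph/osd/ceph-*/current/omap')):
    total = 0
    for root, _dirs, files in os.walk(omap_dir):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # SST files can vanish mid-walk while LevelDB compacts
    print('%s: %.1f MB' % (omap_dir, total / 1048576.0))

Run that on each OSD host (or a representative sample) before and after the
big injection and you'll get a feel for how much index load each OSD carries.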
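P.P.S. And if you ever do want to put a number on the listing slowdown, a
minimal sketch using boto 2 is below; the endpoint, credentials, and bucket
name are placeholders, obviously.

#!/usr/bin/env python
# Time a full bucket listing through RGW. The endpoint, keys and bucket
# name are placeholders -- substitute your own.
import time
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
bucket = conn.get_bucket('my-bucket')

start = time.time()
count = sum(1 for _key in bucket.list())  # pages through 1000 keys at a time
print('listed %d keys in %.1f s' % (count, time.time() - start))

Since an ordered listing has to consult the index shards for each page of
results, I'd expect the per-page cost to grow with your 4096 shards, which
would match the "few seconds" you're seeing.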