Re: Planning for many small files

RADOS doesn't store a list of objects. The RADOS Gateway uses a separate data format on top of objects stored in RADOS, and it keeps a per-user list of buckets and a per-bucket index of objects as "omap" objects in the OSDs (which ultimately end up in a leveldb store). A bucket index is currently a single object (stored on one OSD), so that is a performance (and, much later, storage) bottleneck that you can run into with extremely large buckets that see enough traffic. We don't have a precise number (it depends in large part on how powerful your OSDs are) but it's somewhere in the many millions of objects. 
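To make that concrete, here is a rough librados sketch (Python bindings; the ceph.conf path and pool name are just placeholders, adjust them for your cluster). At the RADOS level the only listing primitive is iterating over an entire pool, which is why RGW has to maintain its own bucket index on top rather than asking RADOS for a per-bucket listing:

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # placeholder path
cluster.connect()
try:
    # RADOS has no per-bucket "list objects" call; the closest primitive is a
    # full iteration over a pool, which is not how RGW serves S3 bucket listings.
    ioctx = cluster.open_ioctx('.rgw.buckets')  # placeholder: an RGW data pool
    try:
        for i, obj in enumerate(ioctx.list_objects()):
            print(obj.key)      # object names as RADOS sees them
            if i >= 9:          # stop after a handful; pools can be huge
                break
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

That's also why the per-bucket index object becomes the hot spot: every S3 listing (and every index update on write) goes through that one object rather than through any RADOS-wide catalog.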
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Monday, March 11, 2013 at 4:28 PM, Rustam Aliyev wrote:

> Thanks Sam,
> 
> That's great. I'm trying to understand a bit of the RADOS internals and I 
> went through the architecture wiki, but some points are still unclear.
> 
> Where does RADOS store the list of objects (object metadata)? According 
> to the RADOSGW docs, S3 bucket listing is available, so it must be stored 
> somewhere.
> 
> Thanks,
> Rustam.
> 
> On 11/03/2013 20:28, Sam Lang wrote:
> > On Fri, Mar 8, 2013 at 9:57 PM, Rustam Aliyev <rustam.lists@xxxxxxx> wrote:
> > > Hi,
> > > 
> > > We need to store ~500M small files (<1MB each) and we have been looking at the
> > > RadosGW solution. We expect about 20 ops/sec (read+write). I'm trying to understand
> > > how monitor nodes store CRUSH maps and what the limitations are.
> > > 
> > > For instance, is there a recommended maximum number of objects per monitor
> > > node? Will adding more monitor nodes help scale the number of small
> > > files and IOPS (assuming the OSDs are not a bottleneck)?
> > 
> > 
> > No, the number of objects is unrelated to the monitors, so 3 monitors
> > should suffice in your case.
> > -sam
> > 
> > > Many thanks,
> > > Rustam.
> > 
> 
> 
> 



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

