Pool Sizes

More newbie questions about librados...

I am making design decisions now that I want to scale to really big sizes in the future, so I need to understand where the size limits and performance bottlenecks come from. Ceph has a reputation for being able to scale to exabytes, but I don't see much guidance on how to sensibly get to such scales. Do I make big objects? Pools with lots of objects in them? Lots of pools? Given a pool with a thousand objects of a megabyte each versus a pool with a million objects of a thousand bytes each, when should I take one approach and when the other? How big can a pool get? Is a billion objects a lot, something Ceph has to work at to handle, or is it no big deal? Is a trillion objects a lot? Is a million pools a lot? A billion pools? How many is "lots" for Ceph?
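
To make the question concrete, here is the sort of chunking I have in mind: split each logical blob across many fixed-size RADOS objects rather than storing it as one giant object. This is only a minimal sketch with the Python rados bindings; the pool name "data", the object-naming scheme, and the 4 MiB chunk size are all placeholder assumptions on my part, not anything I've seen recommended.

    import rados

    CHUNK_SIZE = 4 * 1024 * 1024  # assumed chunk size: 4 MiB per RADOS object

    def write_chunked(ioctx, name, payload):
        """Split one logical blob across many fixed-size RADOS objects."""
        for i in range(0, len(payload), CHUNK_SIZE):
            # Hypothetical naming scheme: "<name>.<chunk index in hex>"
            ioctx.write_full("%s.%016x" % (name, i // CHUNK_SIZE),
                             payload[i:i + CHUNK_SIZE])

    def read_chunked(ioctx, name, total_len):
        """Reassemble the blob from its chunk objects."""
        parts = []
        for i in range(0, total_len, CHUNK_SIZE):
            parts.append(ioctx.read("%s.%016x" % (name, i // CHUNK_SIZE),
                                    min(CHUNK_SIZE, total_len - i)))
        return b"".join(parts)

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("data")  # assumes a pool named "data" exists
        try:
            blob = b"x" * (10 * 1024 * 1024)  # 10 MiB -> three chunk objects
            write_chunked(ioctx, "myblob", blob)
            assert read_chunked(ioctx, "myblob", len(blob)) == blob
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

Is something like this the right shape, or does the chunk size (and therefore the object count) matter much less than I think it does?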

I plan to accumulate data indefinitely and to add cluster capacity on a regular schedule, and I want performance that doesn't degrade with size.

Where do things break down? What is the wrong way to scale Ceph?

Thanks,

-kb, the Kent who guesses putting all his data in a single xattr or single RADOS object would be the wrong way.

P.S. Happy New Year!



