Re: pools limit

On Tue, 16 July 2019 at 16:16, M Ranga Swami Reddy <swamireddy@xxxxxxxxx> wrote:
Hello - I have created a 10-node Ceph cluster running version 14.x. Can you please confirm the below:
Q1 - Can I create 100+ pools (or more) on the cluster? (The reason is that we are creating a pool per project.) Is there any limitation on pool creation?

Q2 - For each of the above pools, I plan to start with pg_num 128 and enable the PG autoscaler, so that Ceph itself increases pg_num based on the data in the pool.


12800 PGs in total might be a bit much, depending on how many OSDs you have in total for these pools. OSDs aim for something like ~100 PGs per OSD at most, so 12800 PGs in total, times 3 for replication=3, means you would need quite a few OSDs per host. I would guess the autoscaler ends up scaling your pools downwards instead of upwards. There is nothing wrong with starting with pg_num 8 or so and letting the autoscaler increase the pools that actually do get a lot of data.
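
As a rough sketch of what that could look like on 14.x (the pool name "project-foo" is just a placeholder, and on Nautilus the autoscaler is a mgr module that has to be enabled first):

    # enable the autoscaler module (needed on Nautilus; default-on in later releases)
    ceph mgr module enable pg_autoscaler

    # make newly created pools default to autoscaling
    ceph config set global osd_pool_default_pg_autoscale_mode on

    # create a project pool with a small initial pg_num and let it grow
    ceph osd pool create project-foo 8
    ceph osd pool set project-foo pg_autoscale_mode on

    # see what the autoscaler thinks each pool should have
    ceph osd pool autoscale-status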

100 pools * replication 3 * pg_num 8 => 2400 PG replicas, which is fine for 24 OSDs but would need more OSDs as some of those pools grow in data/objects.

100 pools * replication 3 * pg_num 128 => 38400 PG replicas, which requires 384 OSDs, or close to 40 OSDs per host in your 10-node setup. That many OSDs in a single box might become a limiting factor in itself.
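
To make the per-OSD arithmetic explicit (these are the hypothetical numbers from above, not values measured on your cluster), a quick shell check:

    # pools * pg_num * replicas / OSDs => PG replicas per OSD (target is ~100)
    echo $(( 100 * 8   * 3 / 24  ))    # => 100: fine for 24 OSDs
    echo $(( 100 * 128 * 3 / 384 ))    # => 100: needs 384 OSDs

On a live cluster, ceph osd df shows the actual number of PGs each OSD carries in its PGS column.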
 
--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
