CephFS pg ratio of metadata/data pool

Hi all,

We have a CephFS whose metadata pool and data pool share the same set of OSDs. According to the PG calculation:

(100 * num_osds) / num_replicas

If we have 56 OSDs, we should set 5120 PGs on each pool to distribute the data evenly across all the OSDs. However, if we set both the metadata pool and the data pool to 5120, we get a "too many PGs per OSD" warning. We currently have 2048 PGs on both the metadata pool and the data pool, but the data does not seem to be distributed evenly across the OSDs, presumably because there are not enough PGs. Can we set a smaller pg_num on the metadata pool and a larger one on the data pool, e.g. 1024 PGs for metadata and 4096 for data? Is there a recommended ratio? Will this cause any performance issues?
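For reference, here is a minimal sketch of the sizing heuristic quoted above. The target of ~100 PGs per OSD, the power-of-two rounding, and the replica size of 3 used in the example are assumptions based on the common pgcalc guidance, not values stated in the original mail.

#!/usr/bin/env python3
# Minimal sketch of the usual PG sizing heuristic.
# Assumptions: ~100 PGs per OSD as the target and rounding up to the next
# power of two, as in the common pgcalc guidance. Adjust target_pgs_per_osd
# and the replica count to match your cluster.

def suggested_pg_num(num_osds, num_replicas, target_pgs_per_osd=100):
    """Return a power-of-two pg_num so that each OSD carries roughly
    target_pgs_per_osd PG replicas for this pool."""
    raw = (target_pgs_per_osd * num_osds) / num_replicas
    # Round up to the next power of two.
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

if __name__ == "__main__":
    # Example for the 56-OSD cluster mentioned above; replica size 3 is an
    # assumption, the original mail does not state it.
    print(suggested_pg_num(56, 3))  # -> 2048

Note that the "too many PGs per OSD" warning counts PG replicas from all pools sharing the same OSDs, so the budget produced by this formula has to be split between the metadata and data pools rather than applied to each pool separately.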

Thanks,
Tim
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



