Re: PG Ratio for EC overwrites Pool

PG count isn’t just about storage size; it also affects performance, parallelism, and recovery.

You want pg_num (with a matching pgp_num) for the RBD metadata pool to be, at the VERY least, the number of OSDs it lives on, rounded up to the next power of 2.  I’d probably go for at least 2x the number of OSDs, rounded up to the next power of 2.  If you have too few, your metadata operations will contend with each other.
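
As a rough sketch of that arithmetic (the pool name "rbd-meta" and the 20-OSD count are made-up placeholders, not numbers from this thread): with 20 OSDs, 2 x 20 = 40, and the next power of 2 above that is 64, so:

    # hypothetical pool name; substitute your own
    ceph osd pool set rbd-meta pg_num 64
    ceph osd pool set rbd-meta pgp_num 64

(pgp_num should track pg_num; on recent releases, adjusting pg_num will bring pgp_num along for you.)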

> On Nov 3, 2022, at 10:24, mailing-lists <mailing-lists@xxxxxxxxx> wrote:
> 
> Dear Ceph'ers,
> 
> I am wondering how to choose the number of PGs for an RBD-EC-Pool.
> 
> To be able to use RBD images on an EC pool, you need a regular replicated RBD pool as well as an EC pool with EC overwrites enabled. But how many PGs would you need for the replicated RBD pool? It doesn't seem to eat a lot of storage, so if I'm not mistaken it could actually be quite a low number of PGs, but is this recommended? Is there a best practice for this?
> 
> 
> Best
> 
> Ken
> 
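
For completeness, a minimal sketch of the setup you describe, assuming placeholder pool names ("rbd-meta", "rbd-data"), placeholder PG counts, and an EC profile called "my-ec-profile" that already exists:

    # replicated pool that holds the RBD image metadata
    ceph osd pool create rbd-meta 64 64 replicated
    ceph osd pool application enable rbd-meta rbd

    # EC pool for the data, with overwrites enabled so RBD can use it
    ceph osd pool create rbd-data 128 128 erasure my-ec-profile
    ceph osd pool set rbd-data allow_ec_overwrites true
    ceph osd pool application enable rbd-data rbd

    # the image lives in the replicated pool; its data goes to the EC pool
    rbd create --size 100G --data-pool rbd-data rbd-meta/test-image

The PG counts above are illustrative only; size the metadata pool per the rule of thumb above.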

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



