Re: MDS Performance and PG/PGP value

> Hello
>
> As previously described here, we have a full-flash NVMe Ceph cluster (16.2.6) with currently only the CephFS service configured.
[...]
> We noticed that the cephfs_metadata pool had only 16 PGs, so we set pg_autoscale_mode to off and increased the number of PGs to 256; with this
> change, the number of SLOW messages has decreased drastically.
>
> Is there any mechanism to increase the number of PGs automatically in such a situation, or is this something that has to be done manually?
>
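
For the record, the manual change you describe boils down to two commands; a minimal sketch, assuming the pool really is named cephfs_metadata as in your message:

    # stop the autoscaler from undoing the manual change on this pool
    ceph osd pool set cephfs_metadata pg_autoscale_mode off

    # raise pg_num; on recent releases pgp_num is bumped along with it automatically
    ceph osd pool set cephfs_metadata pg_num 256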

https://ceph.io/en/news/blog/2022/autoscaler_tuning/
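
The post above covers the knobs that let the autoscaler do this for you instead of turning it off. A rough sketch, assuming the metadata pool is named cephfs_metadata and that your Pacific point release already has the bulk flag described there (the ratio below is only illustrative):

    # see what the autoscaler currently targets for each pool
    ceph osd pool autoscale-status

    # mark the pool as bulk so the autoscaler gives it a full complement of PGs up front
    ceph osd pool set cephfs_metadata bulk true

    # or give the autoscaler an explicit expectation of the pool's share of the cluster
    ceph osd pool set cephfs_metadata target_size_ratio 0.1

With either hint in place you can leave pg_autoscale_mode on and let the autoscaler pick the PG count on its own.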


-- 
May the most significant bit of your life be positive.


