PG autoscaling and device_health_metrics pool PG sizing

Hello,

I’m evaluating Ceph as a storage option, using ceph version 16.2.6 (Pacific, stable), installed with cephadm. I was hoping to use PG autoscaling to reduce operational effort. I’m standing this up on a cluster with 96 OSDs across 9 hosts.

The device_health_metrics pool was created automatically by Ceph once I started adding OSDs, and it was created with 2048 PGs. This seems high and puts a large number of PGs on each OSD. The documentation indicates that I should be targeting around 100 PGs per OSD; is that guideline out of date?
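
For reference, this is roughly what I've been looking at, plus my reading of the 100-PGs-per-OSD guideline (assuming the pool is at the default 3x replication):

    # current pg_num and replica count for the auto-created pool
    ceph osd pool get device_health_metrics pg_num
    ceph osd pool get device_health_metrics size

    # per-OSD PG counts (PGS column)
    ceph osd df

If I understand the guideline correctly, 2048 PGs at size 3 across 96 OSDs works out to 2048 * 3 / 96 = 64 PG replicas per OSD from this one pool alone, before any data pools are added.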

Also, when I created an erasure-coded pool with a 6+2 profile for CephFS, with PG autoscaling enabled, it was created with 1 PG to start and didn’t scale up even as I loaded test data onto it. That gave the entire CephFS the write performance of a single disk, since it was only writing to 1 disk and backfilling to 7 others. Should I be manually setting pg_num to a sane level (512 or 1024), or will the autoscaler size this pool up on its own? I have also never seen any output from ceph osd pool autoscale-status when I try to check the autoscaler’s status.
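
In case it helps, this is roughly how I created the EC pool and enabled autoscaling (the profile, pool, and filesystem names below are just placeholders for what I actually used):

    # 6+2 erasure code profile
    ceph osd erasure-code-profile set ec-6-2 k=6 m=2

    # data pool for CephFS, left at the default pg_num so the autoscaler can manage it
    ceph osd pool create cephfs_data_ec erasure ec-6-2
    ceph osd pool set cephfs_data_ec allow_ec_overwrites true
    ceph osd pool set cephfs_data_ec pg_autoscale_mode on

    # attach it to the existing filesystem
    ceph fs add_data_pool cephfs cephfs_data_ec

    # this returns no output for me
    ceph osd pool autoscale-status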

I’d appreciate some guidance on configuring PGs on Pacific.

Thanks,

Alex
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



