Re: Identify laggy PGs

Good to know. Everything is BlueStore, and usually five spinners share one
SSD for block.db.
Memory should not be a problem: we plan for 4 GB per OSD, with a minimum of
256 GB of memory.
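
As a rough sanity check of that budget (the per-node OSD count is my own
illustration, not a stated figure): 256 GB / 4 GB per OSD leaves headroom
for up to 64 OSDs per node. The per-OSD budget can also be pinned
explicitly, e.g.:

    # 4 GiB per OSD, matching the plan above
    ceph config set osd osd_memory_target 4294967296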

The primary affinity is a nice idea. I had only thought about it for our S3
cluster, because the index is on both SAS and SATA SSDs, and I use the SAS
drives as primaries and the SATA drives only for replication.
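
A minimal sketch of how that bias can be expressed (the OSD IDs are
hypothetical, for illustration only):

    # osd.10 = SAS (assumed), osd.11 = SATA (assumed)
    ceph osd primary-affinity osd.10 1.0   # preferred as primary
    ceph osd primary-affinity osd.11 0.0   # replica only where possible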

On Sat, Aug 17, 2024 at 15:23, Anthony D'Atri <
aad@xxxxxxxxxxxxxx> wrote:

>
> Mostly when they’re spinners.  Especially back in the Filestore days with
> a colocated journal.  Don’t get me started on that.
>
> Too many PGs can exhaust RAM if you’re tight - or still using Filestore.
>
> For a SATA SSD I’d set pg_num so that you average 200-300 PGs per drive.
> Your size mix complicates things, though, because the larger OSDs will get
> many more PGs than the smaller ones.  Be sure to set mon_max_pg_per_osd to
> something like 1000.
>
> You might experiment with primary affinity, so that the smaller OSDs are
> more likely to be primaries and thus get more of the read load.  I’ve seen
> a first-order approximation of this increase read throughput by 20%.
>
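
For reference, a minimal sketch of the settings discussed above (the pool
name and the pg_num value are assumptions for illustration):

    # Raise the per-OSD PG cap as suggested above
    ceph config set global mon_max_pg_per_osd 1000

    # Hypothetical pool: pick pg_num so each SSD averages ~200-300 PGs
    ceph osd pool set rbd-ssd pg_num 4096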
-- 
The self-help group "UTF-8-Probleme" will meet this time, as an exception,
in the large hall.