Re: Inherited CEPH nightmare


I agree with Janne’s thoughts here, especially since you’re on SSDs. 
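
If you do end up following the suggestion below to go from 256 to 512
PGs, the change itself is only a couple of commands. A rough sketch,
assuming the main pool is the RBD pool backing the Proxmox VMs and is
named "vm-pool" (substitute the real name from "ceph osd pool ls"):

    # Check the current PG count for the pool
    ceph osd pool get vm-pool pg_num

    # Raise it to 512; recent releases adjust pgp_num automatically,
    # older ones need it set explicitly
    ceph osd pool set vm-pool pg_num 512
    ceph osd pool set vm-pool pgp_num 512

Expect some backfill while the new PGs are created and data moves
around, so it may be worth doing this outside peak hours.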

> On Oct 12, 2022, at 03:38, Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
> 
>> I've changed some elements of the config now and the results are much better but still quite poor relative to what I would consider normal SSD performance.
>> The number of PGs has been increased from 128 to 256. We have not yet run the JJ Balancer.
>> In terms of performance, I measured the time it takes for ProxMox to clone a 127GB VM. It now clones in around 18 minutes, rather than 1 hour 55 mins before the config changes, so there is progress here.
>> Any other suggestions are welcome.
>> root@cl1-h1-lv:~# ceph osd df
>> ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP      META     AVAIL    %USE   VAR   PGS  STATUS
>> 4    ssd  0.90970   1.00000  932 GiB  635 GiB  632 GiB   1.1 MiB  2.5 GiB  297 GiB  68.12  1.03   79      up
>> 9    ssd  0.90970   1.00000  932 GiB  643 GiB  640 GiB    62 MiB  2.1 GiB  289 GiB  68.98  1.05   81      up
> 
> It would be possible (and might improve things a little more) to
> allow even more PGs for the large pools. You have around 80 PGs per
> OSD now, and 100-200 per OSD across all pools is considered a
> reasonable figure, so if you are at ~80 with 256 PGs on the main
> pool, you could bump it to 512, unless you plan to add many more
> pools later without expanding the number of OSDs.
> 
> Not a huge win, but more a case of placing yourself in the middle of
> the comfort zone: slightly faster scrubs, work spread across more
> OSDs when a single large operation comes in, and so on.
> 
> -- 
> May the most significant bit of your life be positive.
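
Once the pg_num bump has settled, it is easy to sanity-check the
spread again. A minimal sketch, using only standard commands
(autoscale-status is available on Nautilus and later):

    # PGs per OSD (the PGS column) and the usage variance
    ceph osd df

    # Cluster health and backfill progress while data is moving
    ceph -s

    # Per-pool PG counts and what the autoscaler would suggest
    ceph osd pool autoscale-status

With 512 PGs on the main pool the PGS column should land roughly in
the 100-200 per OSD range Janne mentions.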

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



