Re: Snap trimming best practice

Hi Istvan,

Our experience is the opposite: we put as many PGs into the pools as the OSDs can handle. We aim for between 100 and 200 PGs per HDD OSD and accept more than 200 for SSDs. The smaller the PGs, the better all internal operations (snaptrim, recovery, scrubbing, etc.) perform on our cluster.
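That 100-200 PGs/OSD target can be sanity-checked with simple arithmetic: each pool contributes pg_num x replica_count PG copies, spread over the OSDs backing it. A minimal sketch (the pool figures below are made-up illustrations, not numbers from either of our clusters):

```python
# Estimate the average number of PG copies per OSD:
# sum over pools of (pg_num * replica_count), divided by the OSD count.
def pg_copies_per_osd(pools, osd_count):
    """pools: iterable of (pg_num, replica_count) tuples."""
    return sum(pg_num * replicas for pg_num, replicas in pools) / osd_count

# Hypothetical cluster: one 4096-PG data pool and one 512-PG metadata
# pool, both 3x replicated, spread across 80 OSDs.
pools = [(4096, 3), (512, 3)]
print(pg_copies_per_osd(pools, 80))  # 172.8 -- inside the 100-200 band
```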

We had problems with snaptrim on our file system taking more than a day and starting to overlap with the next day's snaptrim run. After bumping the PG count, this went away immediately. On a busy day (many TB deleted), a snaptrim takes maybe 2 hours on an FS with 3 PB of data, all on HDD, at ca. 160 PGs/OSD.

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
Sent: 11 January 2023 09:06:51
To: Ceph Users
Subject:  Snap trimming best practice

Hi,

Have you ever faced issues with snaptrimming while following the Ceph PG allocation recommendation (100 PGs/OSD)?

We have a Nautilus cluster, and we are afraid to increase the PG counts of the pools, because it seems that even with 4 OSDs per NVMe, a higher PG number means slower snaptrimming.

E.g., we have these pools:

Db1: pool size 64,504 G with 512 PGs
Db2: pool size 92,242 G with 256 PGs

Db2 snapshots are removed faster than Db1's.
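For what it's worth, the two pools' own figures already imply very different average PG sizes; this is plain arithmetic on the numbers above, nothing Ceph-specific:

```python
# Average data per PG, from the pool sizes quoted above (in GB).
def avg_pg_size_gb(pool_size_gb, pg_count):
    return pool_size_gb / pg_count

print(round(avg_pg_size_gb(64504, 512)))  # Db1: ~126 GB per PG
print(round(avg_pg_size_gb(92242, 256)))  # Db2: ~360 GB per PG
```

So Db2's PGs are roughly three times larger than Db1's, which is worth keeping in mind when comparing snaptrim times between the two pools.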

From a PG point of view, our OSDs are very underutilized for this reason: each OSD holds at most 25 gigantic PGs, which makes all maintenance very difficult due to backfillfull and OSD-full issues.

Any recommendation if you use this feature?

Thank you

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



