Re: Very slow snaptrim operations blocking client I/O

How is your PG distribution across your OSD devices? Do you have enough
PGs assigned?
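
For example, the per-OSD PG count shows up in the PGS column of ceph
osd df; a minimal check, assuming a Nautilus-or-later CLI:

ceph osd df tree    # per-OSD usage; the PGS column shows PGs per OSD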

Istvan Szabo
Staff Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx<mailto:istvan.szabo@xxxxxxxxx>
---------------------------------------------------

On 2023. Jan 27., at 23:30, Victor Rodriguez <vrodriguez@xxxxxxxxxxxxx> wrote:


Ah yes, checked that too. Monitors and OSDs report via ceph config
show-with-defaults that bluefs_buffered_io is set to true as the
default (it isn't overridden anywhere).


On 1/27/23 17:15, Wesley Dillingham wrote:
I hit this issue once on a Nautilus cluster and changed the OSD
parameter bluefs_buffered_io to true (it was set to false). I believe
the default for this parameter was switched from false to true in
release 14.2.20; however, perhaps you could still check what your OSDs
are configured with in regard to this setting.
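
For example, something like the following should show the effective
value on a running OSD (osd.0 here is just a placeholder):

ceph config show-with-defaults osd.0 | grep bluefs_buffered_io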

Respectfully,

*Wes Dillingham*
wes@xxxxxxxxxxxxxxxxx
LinkedIn <http://www.linkedin.com/in/wesleydillingham>


On Fri, Jan 27, 2023 at 8:52 AM Victor Rodriguez
<vrodriguez@xxxxxxxxxxxxx> wrote:

   Hello,

   Asking for help with an issue. Maybe someone has a clue about what's
   going on.

    Using Ceph 15.2.17 on Proxmox 7.3. A big VM had a snapshot and I
    removed it. A bit later, nearly half of the PGs of the pool entered
    the snaptrim and snaptrim_wait states, as expected. The problem is
    that these operations ran extremely slowly and client I/O was
    nearly nothing, so all VMs in the cluster got stuck as they could
    not perform any I/O to the storage. Taking and removing big
    snapshots is a normal operation that we do often, and this is the
    first time I have seen this issue in any of my clusters.

    Disks are all Samsung PM1733 and the network is 25G. This gives us
    plenty of performance for the use case, and we have never had an
    issue with the hardware.

    Both disk I/O and network I/O were very low. Still, client I/O
    seemed to get queued forever. Disabling snaptrim (ceph osd set
    nosnaptrim) stops any active snaptrim operation and client I/O
    returns to normal. Enabling snaptrim again makes client I/O almost
    halt again.
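
    For reference, the flag is cluster-wide and can be toggled at any
    time; a minimal sketch:

    ceph osd set nosnaptrim      # pause all snaptrim activity
    ceph osd unset nosnaptrim    # resume; PGs re-enter snaptrim state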

   I've been playing with some settings:

    ceph tell 'osd.*' injectargs '--osd-max-trimming-pgs 1'             # max PGs trimming at once per OSD
    ceph tell 'osd.*' injectargs '--osd-snap-trim-sleep 30'             # seconds to sleep between trim ops
    ceph tell 'osd.*' injectargs '--osd-snap-trim-sleep-ssd 30'         # same, SSD-specific variant
    ceph tell 'osd.*' injectargs '--osd-pg-max-concurrent-snap-trims 1' # max concurrent snap trims per PG

    None of them really seemed to help. I also tried restarting the OSD
    services.
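
    A quick way to confirm injected values actually took effect on a
    given OSD (osd.0 as an example; note that injectargs changes do not
    persist across an OSD restart):

    ceph tell osd.0 config get osd_snap_trim_sleep
    ceph tell osd.0 config get osd_max_trimming_pgs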

    This cluster was upgraded from 14.2.x to 15.2.17 a couple of months
    ago. Is there any setting that must be changed after the upgrade
    which may be causing this problem?

    I have scheduled a maintenance window. What should I look for to
    diagnose this problem?

   Any help is very appreciated. Thanks in advance.

   Victor



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



