Ah yes, I checked that too. Monitors and OSDs report via ceph config
show-with-defaults that bluefs_buffered_io is set to true as the default
setting (it isn't overridden anywhere).
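For reference, this is roughly how I checked the effective value (osd.0 is
just an example daemon id; monitors can be queried the same way with their
mon.<id> name):

ceph config show-with-defaults osd.0 | grep bluefs_buffered_io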
On 1/27/23 17:15, Wesley Dillingham wrote:
I hit this issue once on a Nautilus cluster and changed the OSD
parameter bluefs_buffered_io = true (it was set to false). I believe the
default of this parameter was switched from false to true in release
14.2.20; however, perhaps you could still check what your OSDs are
configured with in regard to this config item.
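For example (a minimal sketch; osd.0 is a placeholder id), the value an OSD
is actually running with can be checked, and overridden cluster-wide for
OSDs if needed (depending on the release, the change may only take effect
after an OSD restart):

ceph config show osd.0 bluefs_buffered_io
ceph config set osd bluefs_buffered_io true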
Respectfully,
*Wes Dillingham*
wes@xxxxxxxxxxxxxxxxx
LinkedIn <http://www.linkedin.com/in/wesleydillingham>
On Fri, Jan 27, 2023 at 8:52 AM Victor Rodriguez
<vrodriguez@xxxxxxxxxxxxx> wrote:
Hello,
Asking for help with an issue. Maybe someone has a clue about what's
going on.
Using Ceph 15.2.17 on Proxmox 7.3. A big VM had a snapshot and I removed
it. A bit later, nearly half of the PGs of the pool entered the snaptrim and
snaptrim_wait states, as expected. The problem is that these operations ran
extremely slowly and client I/O was nearly nothing, so all VMs in the
cluster got stuck as they could not perform I/O to the storage. Taking and
removing big snapshots is a normal operation that we do often, and this is
the first time I have seen this issue in any of my clusters.
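(For reference, one way to see which PGs are in those states is the brief
PG dump, which includes the PG state column:)

ceph pg dump pgs_brief | grep snaptrim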
Disks are all Samsung PM1733 and the network is 25G. This gives us plenty
of performance for the use case and we have never had an issue with the
hardware.

Both disk I/O and network I/O were very low. Still, client I/O seemed to
get queued forever. Disabling snaptrim (ceph osd set nosnaptrim) stops any
active snaptrim operations and client I/O returns to normal. Enabling
snaptrim again makes client I/O almost halt again.
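(To be explicit, the flag I'm toggling is simply:)

ceph osd set nosnaptrim      # pause snaptrim cluster-wide
ceph osd unset nosnaptrim    # resume snaptrim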
I've been playing with some settings:
ceph tell 'osd.*' injectargs '--osd-max-trimming-pgs 1'
ceph tell 'osd.*' injectargs '--osd-snap-trim-sleep 30'
ceph tell 'osd.*' injectargs '--osd-snap-trim-sleep-ssd 30'
ceph tell 'osd.*' injectargs '--osd-pg-max-concurrent-snap-trims 1'
None really seemed to help. I also tried restarting the OSD services.
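In case it's relevant, this is how the effective runtime values can be
double-checked on a given OSD (osd.0 is just a placeholder id; note that
injectargs changes don't persist across a daemon restart):

ceph tell osd.0 config get osd_snap_trim_sleep_ssd
ceph tell osd.0 config get osd_pg_max_concurrent_snap_trims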
This cluster was upgraded from 14.2.x to 15.2.17 a couple of months ago.
Is there any setting that must be changed that may be causing this problem?

I have scheduled a maintenance window; what should I look for to diagnose
this problem?
Any help is very appreciated. Thanks in advance.
Victor
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx