Hello Frank,

We've observed a seemingly identical issue when `fstrim` is run on one of our RBD-backed iSCSI multipath devices (we use ceph-iscsi to map RBD images to local multipath devices formatted with XFS). BTW, we are on Nautilus 14.2.22.

------------------ Original ------------------
From: "Frank Schilder" <frans@xxxxxx>;
Date: Thu, Feb 10, 2022 09:19 PM
To: "ceph-users" <ceph-users@xxxxxxx>;
Subject: IO stall after 1 slow op

Dear cephers,

I'm observing something strange on our cluster. A couple of times a day I see a near-complete IO stall on our pools backing RBD devices used by libvirt/KVM. The scenario is always the same: one op is blocked, and 30-40 seconds later an avalanche of blocked ops piles up while collective IO comes to a grinding halt for 60-90 seconds. After this time, the first slow op finally completes and things start moving again.

An example of such a blocked op is below; I can't see anything special about it except its exceptionally long duration. This phenomenon is randomly distributed over all drives and drive types (SSD, HDD) in the cluster, so I don't think a failing drive is the cause. I looked at the OSD logs and couldn't find anything unusual: no deep-scrub, compaction or other activity in the log. It just seems that the OSD worker thread gets stuck for no apparent reason.

Has anyone seen something like this, and does anyone have an idea how to address it? Thanks for any input!

Here is the blocked op:

    "description": "osd_op(client.49504448.0:25390069 2.6fs0 2:f6036592:::rbd_data.1.6ed7996b8b4567.0000000000000379:head [stat,write 1953792~81920] snapc 4f7ce=[4f7ce,4f4fc,4f225,4ef4c,4ec84,4e9b5,4e6d1,4e3fb,4e128,4db85,4c7cb,4b4a8,4a0b2] ondisk+write+known_if_redirected e732656)",
    "initiated_at": "2022-02-10 10:51:19.328624",
    "age": 1894.887633,
    "duration": 88.017855,
    "type_data": {
        "flag_point": "commit sent; apply or cleanup",
        "client_info": {
            "client": "client.49504448",
            "client_addr": "192.168.48.17:0/1008950287",
            "tid": 25390069
        },
        "events": [
            { "time": "2022-02-10 10:51:19.328624", "event": "initiated" },
            { "time": "2022-02-10 10:51:19.328624", "event": "header_read" },
            { "time": "2022-02-10 10:51:19.328625", "event": "throttled" },
            { "time": "2022-02-10 10:51:19.328704", "event": "all_read" },
            { "time": "2022-02-10 10:51:19.328719", "event": "dispatched" },
            { "time": "2022-02-10 10:51:19.328726", "event": "queued_for_pg" },
            { "time": "2022-02-10 10:51:19.329025", "event": "reached_pg" },
            { "time": "2022-02-10 10:51:19.329052", "event": "waiting for rw locks" },
            { "time": "2022-02-10 10:51:19.332192", "event": "reached_pg" },
            { "time": "2022-02-10 10:51:19.332199", "event": "waiting for rw locks" },
            { "time": "2022-02-10 10:51:19.521615", "event": "reached_pg" },
            { "time": "2022-02-10 10:51:19.521638", "event": "waiting for rw locks" },
            { "time": "2022-02-10 10:51:20.117640", "event": "reached_pg" },
            { "time": "2022-02-10 10:51:20.117679", "event": "started" },
            { "time": "2022-02-10 10:51:20.123190", "event": "sub_op_started" },
            { "time": "2022-02-10 10:51:20.124040", "event": "sub_op_committed" },
            { "time": "2022-02-10 10:52:47.346397", "event": "commit_sent" },
            { "time": "2022-02-10 10:52:47.346480", "event": "done" }
        ]
    }

=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
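
P.S. In case it helps anyone narrow this down: below is a minimal sketch of how one might pull the historic-ops dump from an OSD admin socket and print where each slow op spent most of its time. This is only an illustration, not something from Frank's setup or ours: it assumes python3 and the ceph CLI on the OSD host, and OSD_ID and the 10-second threshold are placeholder values to adjust.

#!/usr/bin/env python3
# Sketch: read `ceph daemon osd.<id> dump_historic_ops` and, for each slow op,
# report the largest gap between two consecutive events in its event list.
import json
import subprocess
from datetime import datetime

OSD_ID = 123          # hypothetical OSD id -- use the OSD reporting slow ops
THRESHOLD_S = 10.0    # only look at ops slower than this

def parse_ts(ts):
    # Event timestamps in the dump look like "2022-02-10 10:51:19.328624".
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")

raw = subprocess.check_output(
    ["ceph", "daemon", "osd.{}".format(OSD_ID), "dump_historic_ops"])
dump = json.loads(raw)

for op in dump.get("ops", []):
    if op.get("duration", 0.0) < THRESHOLD_S:
        continue
    events = op["type_data"]["events"]
    if len(events) < 2:
        continue
    # Largest gap between consecutive events, e.g. the ~87 s between
    # sub_op_committed and commit_sent in the op quoted above.
    gap, frm, to = max(
        ((parse_ts(b["time"]) - parse_ts(a["time"])).total_seconds(),
         a["event"], b["event"])
        for a, b in zip(events, events[1:]))
    print("{:8.1f}s total, worst gap {:6.1f}s between {} and {}".format(
        op["duration"], gap, frm, to))
    print("    " + op["description"][:100])

The same approach works against `ceph daemon osd.<id> dump_ops_in_flight` if you want to poll while an op is still blocked rather than after it completed.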
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx