On 01/12/2015 10:53 AM, Keith Busch wrote:
On Mon, 12 Jan 2015, Jens Axboe wrote:
On 01/12/2015 10:04 AM, Bart Van Assche wrote:
The tag state after having stopped multipathd (systemctl stop
multipathd) is as follows:
# dmsetup table /dev/dm-0
0 256000 multipath 3 queue_if_no_path pg_init_retries 50 0 1 1
service-time 0 2 2 8:48 1 1 8:32 1 1
# ls -l /dev/sd[cd]
brw-rw---- 1 root disk 8, 32 Jan 12 17:47 /dev/sdc
brw-rw---- 1 root disk 8, 48 Jan 12 17:47 /dev/sdd
# for d in sdc sdd dm-0; do echo ==== $d; (cd /sys/block/$d/mq &&
find|cut -c3-|grep active|xargs grep -aH ''); done
==== sdc
0/active:10
1/active:14
2/active:7
3/active:13
4/active:6
5/active:10
==== sdd
0/active:17
1/active:8
2/active:9
3/active:13
4/active:5
5/active:10
==== dm-0
-bash: cd: /sys/block/dm-0/mq: No such file or directory
OK, so it's definitely leaking, but only partially: the requests are
freed, yet the active count isn't decremented. I wonder if we're
losing that flag along the way. It's numbered high enough that a cast
to int will drop it; perhaps cmd_flags is being copied or passed
around as an int rather than the appropriate u64? We've had bugs like
that before.
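
To make the failure mode concrete, here is a minimal userspace illustration
(the bit position is made up for the example; the point is only that
REQ_MQ_INFLIGHT lives above bit 31, so a 32-bit intermediate silently drops
it):

	#include <stdio.h>
	#include <stdint.h>

	/* Hypothetical flag above bit 31, standing in for REQ_MQ_INFLIGHT. */
	#define EXAMPLE_MQ_INFLIGHT	(1ULL << 36)

	/* Buggy: cmd_flags squeezed through a 32-bit int on the way. */
	static int inflight_via_int(int flags)
	{
		return !!(flags & EXAMPLE_MQ_INFLIGHT);
	}

	/* Correct: the full 64-bit value is preserved. */
	static int inflight_via_u64(uint64_t flags)
	{
		return !!(flags & EXAMPLE_MQ_INFLIGHT);
	}

	int main(void)
	{
		uint64_t cmd_flags = EXAMPLE_MQ_INFLIGHT;

		printf("via int: %d\n", inflight_via_int(cmd_flags));	/* 0: bit lost */
		printf("via u64: %d\n", inflight_via_u64(cmd_flags));	/* 1 */
		return 0;
	}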
Is the nr_active count correct prior to starting the mkfs test? Trying
to see if someone is calling "blk_mq_alloc_tag_set()" twice on the same
set. It might be good to add a WARN if this is detected anyway.
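
For what it's worth, the check being suggested could be as small as
something like this near the top of blk_mq_alloc_tag_set() (just a sketch,
assuming set->tags is only non-NULL after a prior successful allocation on
the same set):

	/* Sketch only: catch a second blk_mq_alloc_tag_set() on the same set.
	 * Relies on set->tags being NULL until the set has been allocated once.
	 */
	if (WARN_ON_ONCE(set->tags))
		return -EINVAL;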
That might be a good debug aid, I agree. But the above doesn't look like
it's corrupted. If you add the values, you get 60 and 62 for the two
cases, which seems to indicate that we did bump the values correctly,
but for some reason we never did the decrement on completion. Hence we
stabilize around the queue depth of the device, which will be 62 +/- a
bit due to the sharing.
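
For reference, the accounting in question is, roughly (going from memory of
the blk-mq code, so treat the exact locations as approximate), an atomic_inc
when a request on a shared tag set is set up, paired with an atomic_dec when
it is freed, both keyed off the inflight flag:

	/* At request setup time, when the tag set is shared between queues: */
	if (blk_mq_tag_busy(hctx)) {
		rq->cmd_flags |= REQ_MQ_INFLIGHT;
		atomic_inc(&hctx->nr_active);
	}

	/* At request free/completion time: */
	if (rq->cmd_flags & REQ_MQ_INFLIGHT)
		atomic_dec(&hctx->nr_active);

If the flag is gone by the time the request is freed, the second half never
runs and nr_active creeps up exactly as in the dump above.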
I'm not familiar with how rq-based dm works. We clone the original
request (which has the REQ_MQ_INFLIGHT flag set), then we issue the
clone(s) to the underlying device(s)? And when the clone completes, we
complete the original? That would work fine with the flag on the
original request. Maybe I'm missing something, though, so I'll let more
knowledgeable people discuss that.
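
To spell out that model (pure pseudocode with invented helper names, not the
actual dm code):

	/* Invented names; only meant to sketch the clone-and-complete flow
	 * described above.
	 */
	static void dm_style_dispatch(struct request *orig)
	{
		/* orig is the request which, per the description above, has
		 * REQ_MQ_INFLIGHT set on its own (dm) queue */
		struct request *clone = example_clone(orig);	/* hypothetical */

		example_issue_to_path(clone);			/* hypothetical */
	}

	static void dm_style_clone_done(struct request *clone, int error)
	{
		struct request *orig = example_original_of(clone);	/* hypothetical */

		/* Completing the original is what should drop its inflight
		 * accounting on the dm queue; the clone's accounting belongs
		 * to the underlying device's queue.
		 */
		blk_end_request_all(orig, error);
	}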
--
Jens Axboe