Kernel NBD client waits on wrong cookie, aborts connection

Hi all,

The other day I was running some benchmarks to compare different QEMU
block exports, and one of the scenarios I was interested in was
exporting NBD from qemu-storage-daemon over a unix socket and attaching
it as a block device using the kernel NBD client. I would then run a VM
on top of it and fio inside the VM.

Unfortunately, I couldn't get any numbers because the connection always
aborted with messages like "Double reply on req ..." or "Unexpected
reply ..." in the host kernel log.

Yesterday I found some time to take a closer look at why this is
happening, and I think I have a rough understanding of what's going on
now. Look at these trace events:

        qemu-img-51025   [005] ..... 19503.285423: nbd_header_sent: nbd transport event: request 000000002df03708, handle 0x0000150c0000005a
[...]
        qemu-img-51025   [008] ..... 19503.285500: nbd_payload_sent: nbd transport event: request 000000002df03708, handle 0x0000150c0000005d
[...]
   kworker/u49:1-47350   [004] ..... 19503.285514: nbd_header_received: nbd transport event: request 00000000b79e7443, handle 0x0000150c0000005a

This is the same request, but the handle has changed between
nbd_header_sent and nbd_payload_sent! I think this means that we hit one
of the cases where the request is requeued, and when it is executed
again it gets a different blk-mq tag, which is something the nbd driver
doesn't seem to expect.
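
For reference, this is how the handle is put together if I'm reading
nbd.c right: the upper 32 bits are the per-command cookie and the lower
32 bits are the blk-mq unique tag (NBD_COOKIE_BITS is 32). The little
userspace program below is not the kernel code, it just replays that bit
layout with the values from the trace above:

    /* Sketch of the handle layout used by drivers/block/nbd.c, with the
     * cookie/tag values taken from the trace above. */
    #include <stdio.h>
    #include <stdint.h>

    #define NBD_COOKIE_BITS 32

    static uint64_t nbd_cmd_handle(uint32_t cmd_cookie, uint32_t unique_tag)
    {
        return ((uint64_t)cmd_cookie << NBD_COOKIE_BITS) | unique_tag;
    }

    int main(void)
    {
        uint32_t cmd_cookie = 0x150c;

        /* first pass through the send path: blk-mq unique tag 0x5a */
        printf("nbd_header_sent:  0x%016llx\n",
               (unsigned long long)nbd_cmd_handle(cmd_cookie, 0x5a));

        /* after the requeue the same request runs with unique tag 0x5d */
        printf("nbd_payload_sent: 0x%016llx\n",
               (unsigned long long)nbd_cmd_handle(cmd_cookie, 0x5d));

        return 0;
    }

This prints exactly the two handles from the trace, 0x0000150c0000005a
and 0x0000150c0000005d: the cookie half stays the same, only the tag in
the low half changes.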

Of course, since the cookie is transmitted in the header, the server
replies with the original handle that contains the tag from the first
call, while the kernel is only waiting for a handle with the new tag and
is confused by the server response.
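
As far as I can see, the receive side takes the handle from the reply
apart the same way: the low 32 bits are used to look up the request by
its tag, and the upper 32 bits are checked against the command's current
cookie. A reply that still carries the old tag therefore either resolves
to a different request or fails that check, which would explain the
"Double reply"/"Unexpected reply" messages. Again just a userspace
sketch of the arithmetic, not the actual receive path:

    /* Decode direction of the same layout: what the kernel sees when the
     * server echoes the handle from the original request header. */
    #include <stdio.h>
    #include <stdint.h>

    #define NBD_COOKIE_BITS 32

    static uint32_t handle_to_tag(uint64_t handle)
    {
        return (uint32_t)handle;
    }

    static uint32_t handle_to_cookie(uint64_t handle)
    {
        return (uint32_t)(handle >> NBD_COOKIE_BITS);
    }

    int main(void)
    {
        /* the server echoes the handle from the original header... */
        uint64_t reply_handle = 0x0000150c0000005aULL;

        /* ...but after the requeue the request is in flight as tag 0x5d */
        uint32_t waiting_tag = 0x5d;

        uint32_t tag = handle_to_tag(reply_handle);

        printf("reply carries tag 0x%x (cookie 0x%x), kernel waits for tag 0x%x\n",
               tag, handle_to_cookie(reply_handle), waiting_tag);

        if (tag != waiting_tag)
            printf("-> lookup by tag finds a different (or no) request\n");

        return 0;
    }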

I'm not sure yet which of the following options should be considered the
real problem here, so I'm only describing the situation without trying
to provide a patch:

1. Is it that blk-mq should always re-run the request with the same tag?
   I don't expect so, though in practice I was surprised by how often a
   request that nbd requeues actually does end up with the same cookie
   again.

2. Is it that nbd should use cookies that don't depend on the tag?
   Maybe, but then we lose an easy way to identify the request from the
   server response.

3. Is it that nbd should never requeue requests after it has already
   started sending data for them? This sounds most likely to me, but
   also like the biggest change to make in nbd.

4. Or something else entirely?

I tested this with the 6.10.12 kernel from Fedora 40, but a quick git
diff on nbd.c doesn't suggest that anything related has changed since
then. This is how I reproduced it for debugging (without a VM):

$ qemu-storage-daemon --blockdev null-co,size=$((16*(1024**3))),node-name=data --nbd-server addr.type=unix,addr.path=/tmp/nbd.sock --export nbd,id=exp0,node-name=data,writable=on
# nbd-client -unix -N data /tmp/nbd.sock /dev/nbd0
# qemu-img bench -f host_device -w -s 4k -c 1000000 -t none -i io_uring /dev/nbd0

I couldn't trigger the problem with TCP or without the io_uring backend
(i.e. using Linux AIO or the thread pool) for 'qemu-img bench'.

Kevin
