Re: [PATCH v2] virtio_blk: implement init_hctx MQ operation

Hi Max,

On 23.09.2024 00:47, Max Gurtovoy wrote:
>
> On 17/09/2024 17:09, Marek Szyprowski wrote:
>> On 17.09.2024 00:06, Max Gurtovoy wrote:
>>> On 12/09/2024 9:46, Marek Szyprowski wrote:
>>>> Dear All,
>>>>
>>>> On 08.08.2024 00:41, Max Gurtovoy wrote:
>>>>> Set the driver data of the hardware context (hctx) to point directly
>>>>> to the virtio block queue. This cleanup improves code readability and
>>>>> reduces the number of dereferences in the fast path.
>>>>>
>>>>> Reviewed-by: Stefan Hajnoczi <stefanha@xxxxxxxxxx>
>>>>> Signed-off-by: Max Gurtovoy <mgurtovoy@xxxxxxxxxx>
>>>>> ---
>>>>>     drivers/block/virtio_blk.c | 42 ++++++++++++++++++++------------------
>>>>>     1 file changed, 22 insertions(+), 20 deletions(-)
>>>> This patch landed in recent linux-next as commit 8d04556131c1
>>>> ("virtio_blk: implement init_hctx MQ operation"). In my tests I found
>>>> that it introduces a regression in the system suspend/resume operation.
>>>> From time to time the system crashes during a suspend/resume cycle.
>>>> Reverting this patch on top of next-20240911 fixes the problem.
>>> Could you please provide a detailed explanation of the system
>>> suspend/resume operation and the specific testing methodology employed?
>> In my tests I just call the 'rtcwake -s10 -mmem' command many times in a
>> loop. I use a standard Debian image under QEMU/ARM64. Nothing really
>> special.
>
> I ran this test on my bare-metal x86 server in a loop, with fio running
> in the background.
>
> The test passed.


Maybe QEMU is a bit slower and exposes some kind of race caused by this 
change.


>
> Can you please re-test with the linux/master branch with this patch
> applied on top?


This issue is fully reproducible with vanilla v6.11 and v6.10 from Linus
with the $subject patch applied on top.


I've even checked it with x86_64 QEMU and the first random preinstalled
Debian image I found.


Here is a detailed setup if you'd like to check it yourself (tested on an
Ubuntu 22.04 LTS x86_64 host):


1. download the x86_64 preinstalled Debian image:

# wget https://dietpi.com/downloads/images/DietPi_NativePC-BIOS-x86_64-Bookworm.img.xz
# xz -d DietPi_NativePC-BIOS-x86_64-Bookworm.img.xz


2. build the kernel:

# make x86_64_defconfig
# make -j12


3. run QEMU:

# sudo qemu-system-x86_64 -enable-kvm \
     -kernel PATH_TO_YOUR_KERNEL_DIR/arch/x86/boot/bzImage \
     -append "console=ttyS0 root=/dev/vda1 rootwait noapic tsc=unstable init=/bin/sh" \
     -smp 2 -m 2048 \
     -drive file=DietPi_NativePC-BIOS-x86_64-Bookworm.img,format=raw,if=virtio \
     -netdev user,id=net0 -device virtio-net,netdev=net0 \
     -serial mon:stdio -nographic


4. let it boot, then type (copy&paste line by line) into the init shell:

# mount proc /proc -t proc
# mount sys /sys -t sysfs
# n=10; for i in `seq 1 $n`; do echo Test $i of $n; rtcwake -s10 -mmem; date; echo Test $i done; done


5. use 'Ctrl-a' then 'x' to exit the QEMU console.


 > ...


Best regards
-- 
Marek Szyprowski, PhD
Samsung R&D Institute Poland




