AIO requests may be disordered by Qemu-kvm iothread with disk cache=writethrough, Bug or Feature?

Dear KVM Developers:
    I am Xiang Song from UCloud. We have recently encountered a strange phenomenon with the QEMU-KVM iothread.
    When using Linux AIO from the guest OS, we find that the iothread mechanism of QEMU-KVM can reorder I/O requests from the guest,
even when the AIO write requests are issued in order from a single thread. This does not happen on the host OS, however.
    We are not sure whether this is a feature of the QEMU-KVM iothread mechanism or a bug.
 
The testbed is as follows (the guest disk device cache is configured to writethrough):
CPU: Intel(R) Xeon(R) CPU E5-2650
QEMU version: 1.5.3
Host/Guest Kernel:  Both Linux 4.1.8 & Linux 2.6.32, OS type CentOS 6.5
Simplified guest QEMU command:
/usr/libexec/qemu-kvm -machine rhel6.3.0,accel=kvm,usb=off -cpu kvm64 -smp 8,sockets=8,cores=1,threads=1 
-drive file=/var/lib/libvirt/images/song-disk.img,if=none,id=drive-virtio-disk0,format=qcow2,serial=UCLOUD_DISK_VDA,cache=writethrough 
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:22:d5:52,bus=pci.0,addr=0x4

The test code triggering this phenomenon works as follows: it uses the Linux AIO API to issue concurrent asynchronous write requests to a file. During execution it
continuously writes data into the target test file. There are 'X' jobs in total, and each job is assigned a job id JOB_ID, starting from 0. Each job writes 16 * 512
bytes of data into the target file at offset = JOB_ID * 512 (the data is the uint64_t JOB_ID, repeated).
    A single thread handles the 'X' jobs one by one through Linux AIO (io_submit). While handling jobs, it continuously
issues AIO requests without waiting for AIO callbacks. Note that consecutive jobs overlap by 15 sectors, so sector JOB_ID is last written by job JOB_ID; when all
requests complete in submission order, the first 'X' sectors of the file should look like:
         [0....0][1...1][2...2][3...3]...[X-1...X-1]
    Then we use a check program to verify the resulting file: it reads the first 8 bytes (a uint64_t) of each sector and prints the value. In normal cases,
its output is:
          0 1 2 3 .... X-1

Example output (with X=32):
In our guest OS, the output is abnormal: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 18 18 18 18 18 24 25 26 27 28 29 30 31.
    It can be seen that the sectors belonging to jobs 19~23 were overwritten by job 18's write, which completed later.
In our host OS, the output is as expected: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31.


I can provide the example code if needed.

Best regards, song

2015-10-08


charlie.song 
  


