Re: ublk-qcow2: ublk-qcow2 is available

Hi all,

thanks for the notification. I want to note that the official "in kernel
qcow2 (ro)" project was renamed to "xloop" and is now maintained on
GitHub [1]. So far we have been successfully using xloop to implement our
use case explained in [2].

It seems we now have a technical alternative for keeping file-format
specific functionality out of the kernel. When I presented the "in kernel
qcow2 (ro)" project idea on the mailing list [3], there was a discussion
about whether file formats like qcow2 should be implemented in the kernel
at all. That question should now be obsolete.

[1] https://github.com/bwLehrpool/xloop
[2] https://www.spinics.net/lists/linux-block/msg44858.html
[3] https://www.spinics.net/lists/linux-block/msg39538.html

Regards,
Manuel

On 9/30/22 11:24, Ming Lei wrote:
> Hello,
>
> ublk-qcow2 is available now.
>
> So far it provides basic read/write support; compression and snapshots
> aren't supported yet. The target/backend implementation is built entirely
> on io_uring, and it shares the same io_uring with the ublk IO command
> handler, just like ublk-loop does.
>
> The main motivations behind ublk-qcow2 are:
>
> - building one complicated target from scratch helps the libublksrv
>   APIs/functions mature and stabilize more quickly, since qcow2 is complex
>   and demands more from libublksrv than the simpler targets (loop, null)
>
> - there have been several attempts to implement a qcow2 driver in the
>   kernel, such as ``qloop`` [2], ``dm-qcow2`` [3] and ``in kernel
>   qcow2 (ro)`` [4], so ublk-qcow2 might be useful for covering the
>   requirements in this field
>
> - performance comparison with qemu-nbd; writing a ublk-qcow2 target to
>   evaluate the performance of the ublk/io_uring backend has been my first
>   thought since ublksrv was started
>
> - helping to abstract common building blocks and design patterns for
>   writing new ublk targets/backends
>
> So far it basically passes the xfstests (XFS) suite with a ublk-qcow2
> block device used as TEST_DEV, and a kernel build workload has been
> verified too. A soft-update approach is applied to metadata flushing, so
> metadata integrity is guaranteed; 'make test T=qcow2/040' covers this
> kind of test, and only cluster leaks are reported during it.
>
> The performance numbers look much better than qemu-nbd's; see details in
> the commit log [1], README [5] and STATUS [6]. The tests cover both empty
> and pre-allocated images; for example, with a pre-allocated 8GB qcow2
> image:
>
> - qemu-nbd (make test T=qcow2/002)
> 	randwrite(4k): jobs 1, iops 24605
> 	randread(4k): jobs 1, iops 30938
> 	randrw(4k): jobs 1, iops read 13981 write 14001
> 	rw(512k): jobs 1, iops read 724 write 728
>
> - ublk-qcow2 (make test T=qcow2/022)
> 	randwrite(4k): jobs 1, iops 104481
> 	randread(4k): jobs 1, iops 114937
> 	randrw(4k): jobs 1, iops read 53630 write 53577
> 	rw(512k): jobs 1, iops read 1412 write 1423
>
> ublk-qcow2 also aligns the queue's chunk_sectors limit with qcow2's
> cluster size, which is 64KB by default. This simplifies backend IO
> handling, but the limit could be raised to 512K or another more suitable
> size to improve sequential IO performance; that just requires one
> coroutine to handle more than one IO.
>
>
> [1] https://github.com/ming1/ubdsrv/commit/9faabbec3a92ca83ddae92335c66eabbeff654e7
> [2] https://upcommons.upc.edu/bitstream/handle/2099.1/9619/65757.pdf?sequence=1&isAllowed=y
> [3] https://lwn.net/Articles/889429/
> [4] https://lab.ks.uni-freiburg.de/projects/kernel-qcow2/repository
> [5] https://github.com/ming1/ubdsrv/blob/master/qcow2/README.rst
> [6] https://github.com/ming1/ubdsrv/blob/master/qcow2/STATUS.rst
>
> Thanks,
> Ming
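The chunk_sectors alignment Ming describes is simple arithmetic: the block
layer expresses chunk_sectors in 512-byte sector units, so a 64KB cluster
maps to a limit of 128 sectors. A minimal sketch (the helper name is
hypothetical, just for illustration):

```python
# The block layer expresses queue limits such as chunk_sectors in
# 512-byte sector units.
SECTOR_SHIFT = 9
SECTOR_SIZE = 1 << SECTOR_SHIFT  # 512 bytes

def cluster_to_chunk_sectors(cluster_size: int) -> int:
    """Hypothetical helper: map a qcow2 cluster size in bytes to a
    chunk_sectors limit. qcow2 cluster sizes are powers of two of at
    least 512 bytes, so they always divide evenly."""
    assert cluster_size % SECTOR_SIZE == 0
    return cluster_size >> SECTOR_SHIFT

print(cluster_to_chunk_sectors(64 * 1024))    # default 64KB cluster -> 128
print(cluster_to_chunk_sectors(512 * 1024))   # 512KB cluster -> 1024
```

With the default 64KB cluster, every backend IO stays within one cluster,
which is what keeps the IO handling simple.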


