Re: [PATCH blktests 0/5] Fix failures found with zoned block devices

Hi Omar,

On Wed, 2019-02-20 at 10:22 -0800, Omar Sandoval wrote:
> On Wed, Feb 20, 2019 at 05:12:26PM +0900, Shin'ichiro Kawasaki wrote:
> > This patch series addresses two incorrect test failures found in the
> > zbd test group. Two other problems with the check script and the
> > common rc script are also fixed.
> > 
> > More specifically,
> > * Patch 1 addresses an incorrect failure of block/024 caused by a
> >   shorter write I/O time than expected on very fast systems with low
> >   overhead.
> > * Patch 2 fixes test zbd/004, which can fail when a disk closes an
> >   implicitly open zone after completion of a write command. To avoid
> >   this failure, the closed zone condition is added as an allowed
> >   condition.
> > * Patches 3 to 5 fix problems accessing block device sysfs
> >   attributes when the target device is a partition.
> > 
> > Of note is that test block/004 still fails when the target device
> > is a partition of a zoned block device. This failure is caused by
> > an incorrect access to sysfs disk attributes by fio. A patch fixing
> > this issue was sent to the fio mailing list.
> 
> Thanks, I merged 1 and 2. I'm not necessarily opposed to the rest, but
> I'm curious what your use case is for testing on partitions?

Paraphrasing my answer to Bart, who had a similar question on the fio
mailing list.

For host-managed disks, partitioning a zoned block device is not a very
compelling use case, nor is it commonly used in the field as far as I
know. Chunking a host-managed disk into smaller drives with dm-linear is
likely a better option. There is, however, one use case I have seen in
the field: a partition was created over the conventional zone space of a
disk to obtain an essentially normal (i.e. randomly writable) block
device, with an ext4 file system on top hosting a metadata DB that
manages the data stored on the remaining sequential zones of the disk.
Such a choice simplifies the overall system design by enabling the use
of proven components (ext4, DB) rather than writing or porting
everything to fit a pure zoned block device model.
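
As an illustration only (the device name /dev/sdb and the partition
start/size values below are made up), such a partition could be set up
along these lines with the standard blkzone and sfdisk tools:

    # Count the conventional zones; on most host-managed disks they
    # sit at the beginning of the LBA range.
    blkzone report /dev/sdb | grep -c CONVENTIONAL

    # Zone size in 512-byte sectors, to compute where the conventional
    # zone range ends.
    cat /sys/block/sdb/queue/chunk_sectors

    # Create a partition that stays within the conventional zone range
    # (start and size here are hypothetical), then format it.
    echo 'start=2048, size=522240' | sfdisk /dev/sdb
    mkfs.ext4 /dev/sdb1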

The main use case for partitions is with host-aware disk models, as
these can be written randomly anywhere and are thus in essence "normal"
disks, all the more so given that these drives' SCSI device type is
0x0, the same as regular disk devices. As such, partitioning host-aware
disks in the same manner as regular disks is a valid use case.
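
For reference, both the zone model and the SCSI device type of a drive
can be read directly from sysfs (/dev/sdb is a placeholder here):

    # Zone model: "none", "host-aware" or "host-managed".
    cat /sys/block/sdb/queue/zoned

    # SCSI device type: 0 (0x0) for host-aware and regular disks,
    # 20 (0x14, TYPE_ZBC) for host-managed disks.
    cat /sys/block/sdb/device/type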

The intent of these fixes is to allow running tests against partitions
to check the kernel handling of partition sector remapping for zoned
block disks. Since the kernel supports (or rather, does not prevent)
partitions of zoned disks, extending test coverage to that code is, I
think, useful. No specific test cases or groups are needed in blktests
for that; we only need the ability to specify a partition device file
in the blktests config (TEST_DEVS variable). Test runs on top of zoned
dm-linear devices have the same intent (sector remapping verification)
and already work as-is. Getting the same to run for partitions would
close a gap.
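
To make that concrete, a minimal blktests config for testing a
partition would look like this (device names hypothetical):

    # ./config at the top of the blktests source tree
    TEST_DEVS=(/dev/sdb1)

and the zoned dm-linear setup that already works today is along these
lines:

    # Map the whole zoned disk through a linear target; for zoned
    # devices the mapping offset and length must be zone aligned.
    echo "0 $(blockdev --getsz /dev/sdb) linear /dev/sdb 0" | \
        dmsetup create zlinear

    # Then point blktests at it: TEST_DEVS=(/dev/mapper/zlinear)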

Thank you for your review.

Best regards.

-- 
Damien Le Moal
Western Digital Research



