Will these tests succeed if the t/zbd/test-zbd-support script is run against
an SMR HDD? Since zoned HDDs don't support Zone Append, I would expect the
i/o to fail. I think you need to check whether the device is an NVMe drive
and expect the i/o failure in the tests below if this is not the case
(a rough sketch of what I mean is at the end of this mail).

More inline...

> -----Original Message-----
> From: fio-owner@xxxxxxxxxxxxxxx <fio-owner@xxxxxxxxxxxxxxx> On Behalf
> Of Krishna Kanth Reddy
> Sent: Thursday, June 25, 2020 1:39 PM
> To: axboe@xxxxxxxxx
> Cc: fio@xxxxxxxxxxxxxxx; Krishna Kanth Reddy <krish.reddy@xxxxxxxxxxx>;
> Ankit Kumar <ankit.kumar@xxxxxxxxxxx>
> Subject: [PATCH 4/4] t/zbd: Add support to verify Zone Append command
> with libaio, io_uring IO engine tests
>
> Modify the test-zbd-support script to verify the Zone Append command
> for NVMe Zoned Namespaces (ZNS) defined in NVM Express TP4053.
> Added a new FIO option zone_append.
> When zone_append option is enabled, the existing write path will
> send Zone Append command with LBA offset as start of the Zone.
>
> Signed-off-by: Ankit Kumar <ankit.kumar@xxxxxxxxxxx>
> ---
>  t/zbd/test-zbd-support | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 48 insertions(+)
>
> diff --git a/t/zbd/test-zbd-support b/t/zbd/test-zbd-support
> index 4001be3..ddade22 100755
> --- a/t/zbd/test-zbd-support
> +++ b/t/zbd/test-zbd-support
> @@ -801,6 +801,54 @@ test48() {
>               >> "${logfile}.${test_number}" 2>&1 || return $?
>  }
>
> +# Zone append to sequential zones, libaio, 1 job, queue depth 1
> +test49() {
> +    local i size
> +
> +    size=$((4 * zone_size))
> +    run_fio_on_seq --ioengine=libaio --iodepth=1 --rw=write --zone_append=1 \
> +                   --bs="$(max $((zone_size / 64)) "$logical_block_size")"\
> +                   --do_verify=1 --verify=md5 \
> +                   >>"${logfile}.${test_number}" 2>&1 || return $?
> +    check_written $size || return $?
> +    check_read $size || return $?
> +}
> +
> +# Random zone append to sequential zones, libaio, 8 jobs, queue depth 64 per job
> +test50() {
> +    local size
> +
> +    size=$((4 * zone_size))
> +    run_fio_on_seq --ioengine=libaio --iodepth=64 --rw=randwrite --bs=4K \
> +                   --group_reporting=1 --numjobs=8 --zone_append=1 \
> +                   >> "${logfile}.${test_number}" 2>&1 || return $?
> +    check_written $((size * 8)) || return $?
> +}
> +
> +# Zone append to sequential zones, io_uring, 1 job, queue depth 1
> +test51() {
> +    local i size
> +
> +    size=$((4 * zone_size))
> +    run_fio_on_seq --ioengine=io_uring --iodepth=1 --rw=write --zone_append=1 \
> +                   --bs="$(max $((zone_size / 64)) "$logical_block_size")"\
> +                   --do_verify=1 --verify=md5 \
> +                   >>"${logfile}.${test_number}" 2>&1 || return $?
> +    check_written $size || return $?
> +    check_read $size || return $?
> +}
> +
> +# Random zone append to sequential zones, io_uring, 8 jobs, queue depth 64 per job
> +test52() {
> +    local size
> +
> +    size=$((4 * zone_size))

Maybe try some different size? It is the same in all tests.

> +    run_fio_on_seq --ioengine=io_uring --iodepth=64 --rw=randwrite --bs=4K \

All tests do 4K i/o, but maybe try to run with a different block size?
It could be a good idea to add a test that will write with bs=ZASL (or
MDTS). Yet another test issuing i/o with bs exceeding the maximum i/o
size would be very useful (see the sketch at the end of this mail).

> +                   --group_reporting=1 --numjobs=8 --zone_append=1 \
> +                   >> "${logfile}.${test_number}" 2>&1 || return $?
> +    check_written $((size * 8)) || return $?
> +}
> +
>  tests=()
>  dynamic_analyzer=()
>  reset_all_zones=
> --
> 2.7.4
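
To make the device check I am suggesting above a bit more concrete, here is a
rough, untested sketch. is_nvme_dev() is a hypothetical helper name, the
name-based match is only a heuristic, and I am assuming the script's $dev
variable holds the block device under test (one of the existing require_*
style helpers may be a better place for this):

is_nvme_dev() {
    # Crude heuristic: NVMe namespaces show up as /dev/nvme<ctrl>n<ns>,
    # while SMR HDDs show up as /dev/sd*. Only the former can support
    # Zone Append.
    case "$(basename "$dev")" in
    nvme*) return 0;;
    *)     return 1;;
    esac
}

Each of the new tests could then start with something along these lines:

    if ! is_nvme_dev; then
        # Zone Append is not supported by the device, so a failing fio
        # run is the expected outcome here.
        run_fio_on_seq --ioengine=libaio --iodepth=1 --rw=write \
                       --zone_append=1 --bs="$logical_block_size" \
                       >> "${logfile}.${test_number}" 2>&1 && return 1
        return 0
    fi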
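
And for the extra block-size coverage mentioned inline, something like the
sketch below (again untested; test53 is just a placeholder number, and I am
assuming the kernel exposes the ZASL/MDTS-derived limit in
/sys/block/<dev>/queue/zone_append_max_bytes -- if not, the value would have
to be queried with nvme-cli instead):

# Zone append with bs equal to the maximum zone append size, libaio,
# 1 job, queue depth 1
test53() {
    local max_append

    # Placeholder source for the per-device append limit.
    max_append=$(<"/sys/block/$(basename "$dev")/queue/zone_append_max_bytes")
    run_fio_on_seq --ioengine=libaio --iodepth=1 --rw=write --zone_append=1 \
                   --bs="$max_append" \
                   >> "${logfile}.${test_number}" 2>&1 || return $?
    # A check_written call would need the same byte accounting as the
    # other tests, allowing for bs not dividing the zone size evenly.
}

A sibling test could then use a bs above that limit (e.g. $((max_append * 2)))
and invert the check, treating a successful run as a test failure, assuming
the new zone_append path rejects such i/o instead of splitting it.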