Re: [PATCH blktests 0/3] Add NVMeOF multipath tests

On Thu, Sep 27, 2018 at 04:26:42PM -0700, Bart Van Assche wrote:
> On Tue, 2018-09-18 at 17:18 -0700, Omar Sandoval wrote:
> > On Tue, Sep 18, 2018 at 05:02:47PM -0700, Bart Van Assche wrote:
> > > On 9/18/18 4:24 PM, Omar Sandoval wrote:
> > > > On Tue, Sep 18, 2018 at 02:20:59PM -0700, Bart Van Assche wrote:
> > > > > Can you have a look at the updated master branch of
> > > > > https://github.com/bvanassche/blktests? That code should no longer fail if
> > > > > unloading the nvme kernel module fails. Please note that you will need
> > > > > kernel v4.18 to test these scripts - a KASAN complaint appears if I run
> > > > > these tests against kernel v4.19-rc4.
> > > > 
> > > > Thanks, these pass now. Is it expected that my nvme device gets a
> > > > multipath device configured after running these tests?
> > > > 
> > > > $ lsblk
> > > > NAME     MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
> > > > vda      254:0    0  16G  0 disk
> > > > └─vda1   254:1    0  16G  0 part  /
> > > > vdb      254:16   0   8G  0 disk
> > > > vdc      254:32   0   8G  0 disk
> > > > vdd      254:48   0   8G  0 disk
> > > > nvme0n1  259:0    0   8G  0 disk
> > > > └─mpatha 253:0    0   8G  0 mpath
> > > 
> > > No, all multipath devices that were created during a test should be removed
> > > before that test finishes. I will look into this.
> > > 
> > > > Also, can you please fix:
> > > > 
> > > > 	_have_kernel_option NVME_MULTIPATH && exit 1
> > > > 
> > > > to not exit on failure? It should use SKIP_REASON and return 1. You
> > > > might need to add something like _dont_have_kernel_option to properly
> > > > handle the case where the config is not found.
> > > 
> > > OK, I will change this.
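> > > 
> > > Something like the following could work (a sketch only; the helper name
> > > and the config file locations are assumptions, not final code):
> > > 
> > > 	_dont_have_kernel_option() {
> > > 		# Sketch: where the kernel config lives varies by distro,
> > > 		# so try /proc/config.gz first and fall back to /boot.
> > > 		local config opt=$1
> > > 		if [[ -r /proc/config.gz ]]; then
> > > 			config=$(zcat /proc/config.gz)
> > > 		elif [[ -r /boot/config-$(uname -r) ]]; then
> > > 			config=$(</boot/config-"$(uname -r)")
> > > 		else
> > > 			SKIP_REASON="kernel config not found"
> > > 			return 1
> > > 		fi
> > > 		if grep -q "^CONFIG_${opt}=[ym]$" <<<"$config"; then
> > > 			SKIP_REASON="kernel option CONFIG_${opt} has been set"
> > > 			return 1
> > > 		fi
> > > 		return 0
> > > 	}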
> > > 
> > > > A side note, which isn't a blocker for merging: there's a lot of
> > > > duplicated code between these helpers and the srp helpers. How hard
> > > > would it be to refactor that?
> > > 
> > > Are you perhaps referring to the code that is shared between the
> > > tests/srp/rc and tests/nvmeof-mp/rc shell scripts?
> > 
> > Yes, those.
> > 
> > > The hardest part is probably
> > > choosing where to store these functions. Should I create a file
> > > with common code under common/, under tests/srp/, under tests/nvmeof-mp/, or
> > > perhaps somewhere else?
> > 
> > Just put it under common.
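> > 
> > E.g. move the shared functions into one file (the name below is just an
> > example) and source it from both tests/srp/rc and tests/nvmeof-mp/rc:
> > 
> > 	. common/multipath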
> 
> Hi Omar,
> 
> All feedback mentioned above has been addressed. The following pull request has
> been updated: https://github.com/osandov/blktests/pull/33. Please let me know
> if you want me to post these patches on the linux-block mailing list.
> 
> Note: neither the upstream kernel v4.18 nor v4.19-rc4 is stable enough to pass
> all nvmeof-mp tests if kernel debugging options like KASAN are enabled.
> Additionally, the NVMe device_add_disk() race condition often causes multipathd
> to refuse to consider /dev/nvme... devices. The output on my test setup is as
> follows (all tests pass):
> 
> # ./check -q nvmeof-mp
> nvmeof-mp/001 (Log in and log out)                           [passed]
>     runtime  1.528s  ...  1.909s
> nvmeof-mp/002 (File I/O on top of multipath concurrently with logout and login (mq)) [passed]
>     runtime  38.968s  ...  38.571s
> nvmeof-mp/004 (File I/O on top of multipath concurrently with logout and login (sq-on-mq)) [passed]
>     runtime  38.632s  ...  37.529s
> nvmeof-mp/005 (Direct I/O with large transfer sizes and bs=4M) [passed]
>     runtime  13.382s  ...  13.684s
> nvmeof-mp/006 (Direct I/O with large transfer sizes and bs=8M) [passed]
>     runtime  13.511s  ...  13.480s
> nvmeof-mp/009 (Buffered I/O with large transfer sizes and bs=4M) [passed]
>     runtime  13.665s  ...  13.763s
> nvmeof-mp/010 (Buffered I/O with large transfer sizes and bs=8M) [passed]
>     runtime  13.442s  ...  13.900s
> nvmeof-mp/011 (Block I/O on top of multipath concurrently with logout and login) [passed]
>     runtime  37.988s  ...  37.945s
> nvmeof-mp/012 (dm-mpath on top of multiple I/O schedulers)   [passed]
>     runtime  21.659s  ...  21.733s

Thanks, Bart, merged.


