Re: [PATCH blktests 0/3] Add NVMeOF multipath tests

On Tue, Sep 18, 2018 at 02:20:59PM -0700, Bart Van Assche wrote:
> On 8/23/18 5:21 PM, Omar Sandoval wrote:
> > On Thu, Aug 23, 2018 at 01:53:33AM +0000, Bart Van Assche wrote:
> > > On Tue, 2018-08-21 at 08:46 +0200, Johannes Thumshirn wrote:
> > > > On Mon, Aug 20, 2018 at 03:46:45PM +0000, Bart Van Assche wrote:
> > > > > Moving these tests into the nvme directory is possible but will make it
> > > > > harder to run the NVMeOF multipath tests separately. Are you fine with this?
> > > > 
> > > > Both ways have their ups and downsides, I agree.
> > > > 
> > > > Having two distinct groups requires running './check nvme nvmeof-mp' to
> > > > get full coverage with nvme.
> > > > 
> > > > Having it all in one group would require running './check nvme 18 19 20
> > > > 21 22 23 24 ...' to get only the dm-mpath ones.
> > > > 
> > > > Honestly I hate both, but yours (the two distinct groups) is probably
> > > > easier to handle in the end, I have to admit.
> > > 
> > > Omar, do you have a preference for one of the two aforementioned approaches?
> > 
> > Let's keep it in a separate category, since lots of people running nvme
> > tests probably aren't interested in testing multipath.
> > 
> > A bunch of the tests failed with
> > 
> > modprobe: FATAL: Module nvme is in use.
> > 
> > Maybe related to my test VM having an nvme device?
> 
> Hello Omar,
> 
> Can you have a look at the updated master branch of
> https://github.com/bvanassche/blktests? That code should no longer fail if
> unloading the nvme kernel module fails. Please note that you will need
> kernel v4.18 to test these scripts - a KASAN complaint appears if I run
> these tests against kernel v4.19-rc4.

Thanks, these pass now. Is it expected that my nvme device gets a
multipath device configured after running these tests?

$ lsblk
NAME     MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
vda      254:0    0  16G  0 disk
└─vda1   254:1    0  16G  0 part  /
vdb      254:16   0   8G  0 disk
vdc      254:32   0   8G  0 disk
vdd      254:48   0   8G  0 disk
nvme0n1  259:0    0   8G  0 disk
└─mpatha 253:0    0   8G  0 mpath
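
If that is not expected, maybe the tests (or their documentation) should
suggest blacklisting local NVMe namespaces in multipath.conf. A rough sketch
using the standard multipath.conf blacklist syntax (just an illustration,
not something the patches currently ship):

	blacklist {
		devnode "^nvme[0-9]+n[0-9]+$"
	}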

Also, can you please fix:

	_have_kernel_option NVME_MULTIPATH && exit 1

to not exit on failure? It should use SKIP_REASON and return 1. You
might need to add something like _dont_have_kernel_option to properly
handle the case where the config is not found.
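
Something along these lines is what I have in mind; the helper name, the
config file locations and the SKIP_REASON texts below are only an
illustration, not a requirement:

	# Possible helper: skip when CONFIG_<opt> is enabled, and also skip
	# when no kernel config can be found at all.
	_dont_have_kernel_option() {
		local f opt=$1

		for f in /proc/config.gz "/boot/config-$(uname -r)"; do
			[[ -r $f ]] || continue
			if zgrep -q "^CONFIG_${opt}=[ym]$" "$f"; then
				SKIP_REASON="kernel option CONFIG_${opt} is enabled"
				return 1
			fi
			return 0
		done

		SKIP_REASON="kernel config not found"
		return 1
	}

The test's requires() would then call '_dont_have_kernel_option
NVME_MULTIPATH' instead of exiting.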

A side note, which isn't a blocker for merging: there's a lot of duplicated
code between these helpers and the srp helpers. How hard would it be to
refactor that?
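
For instance (file name made up, purely illustrative), the shared functions
could move into something like common/multipath-helpers that both groups
source from their rc files:

	# tests/srp/rc and tests/nvmeof-mp/rc would then both start with:
	. common/rc
	. common/multipath-helpers

so that each group only keeps its transport-specific setup code.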
