Re: About xfstests generic/361

On Tue, Oct 29, 2019 at 08:29:38AM +0800, Ian Kent wrote:
> On Mon, 2019-10-28 at 16:34 -0700, Darrick J. Wong wrote:
> > On Mon, Oct 28, 2019 at 05:17:05PM +0800, Ian Kent wrote:
> > > Hi Darrick,
> > > 
> > > Unfortunately I'm having a bit of trouble with my USB keyboard
> > > and random key repeats; I lost several important messages this
> > > morning because of it.
> > > 
> > > Your report of the xfstests generic/361 problem was one of them
> > > (as was Christoph's mail about the mount code location, I'll post
> > > on that a bit later). So I'm going to have to refer to the posts
> > > and hope that I can supply enough context to avoid confusion.
> > > 
> > > Sorry about this.
> > > 
> > > Anyway, you posted:
> > > 
> > > "Dunno what's up with this particular patch, but I see regressions
> > > on
> > > generic/361 (and similar asserts on a few others).  The patches
> > > leading
> > > up to this patch do not generate this error."
> > > 
> > > I've reverted to a point more or less before moving the mount
> > > and super block handling code around and tried to reproduce the
> > > problem on my test VM, but I didn't see it.
> > > 
> > > Is there anything I need to do when running the test, other than
> > > having SCRATCH_MNT and SCRATCH_DEV defined in the local config and
> > > the mount point and the device existing?
> > 
> > Um... here's the kernel branch that I used:
> > 
> > https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux.git/log/?h=mount-api-crash
> 
> Ok, I'll see what I can do with that.
> 
> > 
> > Along with:
> > 
> > MKFS_OPTIONS -- -m crc=0
> 
> Right.
> 
> > MOUNT_OPTIONS -- -o usrquota,grpquota
> 
> It looked like generic/361 uses only SCRATCH_DEV, so I thought
> that meant making a file system and mounting it within the test.

Yes.  MOUNT_OPTIONS are used to mount the scratch device (and in my case
the test device too).
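
For reference, here's a minimal local.config sketch covering those
variables; the device nodes and mount points below are placeholders,
not my actual setup:

    # xfstests local.config -- minimal sketch; adjust the devices and
    # mount points for your own machine.
    export FSTYP=xfs
    export TEST_DEV=/dev/sdb           # placeholder device
    export TEST_DIR=/mnt/test
    export SCRATCH_DEV=/dev/sdc        # placeholder device
    export SCRATCH_MNT=/mnt/scratch
    export MKFS_OPTIONS="-m crc=0"
    export MOUNT_OPTIONS="-o usrquota,grpquota"

With that in place the single test is just "./check generic/361" run
from the top of the xfstests tree.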

> > and both TEST_DEV and SCRATCH_DEV pointed at boring scsi disks.
> 
> My VM disks are VirtIO (file-based) virtual disks, so that sounds
> a bit different.
> 
> Unfortunately I can't use raw disks on the NAS I use for VMs, and
> I've migrated away from having a desktop machine with a couple of
> disks to help with testing.
> 
> I have other options if I really need them, but it's a little
> harder to set up and use company lab machines remotely compared to
> local hardware (requesting additional disks is hard to do), and
> I'm not sure (probably not) whether they can/will use raw disks (or
> partitions) either.

Sorry, I meant 'boring SCSI disks' in a VM.

Er, let's see what the libvirt config is...

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='unsafe' discard='unmap'/>
      <source file='/run/mtrdisk/a.img'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

Which currently translates to virtio-scsi disks.
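
The virtio-scsi part comes from the SCSI controller definition in the
same domain XML rather than from the disk stanza itself; a typical
controller element (shown here only as an illustrative sketch) looks
like:

    <!-- illustrative sketch: virtio-scsi controller backing the
         bus='scsi' target above -->
    <controller type='scsi' index='0' model='virtio-scsi'/>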

> > 
> > > This could have been a problem with the series I posted because
> > > I did have some difficulty resolving some conflicts along the
> > > way and may have made mistakes, hence reverting to earlier patches
> > > (but also keeping the recent small pre-patch changes).
> > 
> > Yeah, I had the same problem too; you might spot check the commits in
> > there just in case /I/ screwed them up.
> 
> I will, yes.
> 
> > 
> > (I would say 'or rebase on for-next' but (a) I don't know how
> > Christoph's mount cleanups intermix with that and (b) let's see if
> > this afternoon's for-next is less broken on s390 than this morning's
> > was
> > <frown>)
> 
> I neglected to mention that my series is now based on the for-next
> branch; I noticed the get_tree_bdev() fix is present there, so I can
> drop the first patch.
> 
> It seemed to me that the for-next branch is the right place to base
> the series on. I expect there will be the odd bump in the road, of
> course ...

Heh. Yes. :)

--D

> Ian
> 


