Re: 5.3-rc1 regression with XFS log recovery

On Mon, Aug 19, 2019 at 06:40:12AM +0200, hch@xxxxxx wrote:
> On Mon, Aug 19, 2019 at 06:29:05AM +0200, hch@xxxxxx wrote:
> > On Mon, Aug 19, 2019 at 02:22:59PM +1000, Dave Chinner wrote:
> > > That implies a kmalloc heap issue.
> > > 
> > > Oh, is memory poisoning or something that modifies the alignment of
> > > slabs turned on?
> > > 
> > > i.e. 4k/8k allocations from the kmalloc heap slabs might not be
> > > appropriately aligned for IO, similar to the problems we have with
> > > the xen blk driver?
> > 
> > That is what I suspect, and as you can see in the attached config I
> > usually run with slab debugging on.
> 
> Yep, looks like an unaligned allocation:
> 
> root@testvm:~# mount /dev/pmem1 /mnt/
> [   62.346660] XFS (pmem1): Mounting V5 Filesystem
> [   62.347960] unaligned allocation, offset = 680
> [   62.349019] unaligned allocation, offset = 680
> [   62.349872] unaligned allocation, offset = 680
> [   62.350703] XFS (pmem1): totally zeroed log
> [   62.351443] unaligned allocation, offset = 680
> [   62.452203] unaligned allocation, offset = 344
> [   62.528964] XFS: Assertion failed: head_blk != tail_blk, file:
> fs/xfs/xfs_lo6
> [   62.529879] ------------[ cut here ]------------
> [   62.530334] kernel BUG at fs/xfs/xfs_message.c:102!
> [   62.530824] invalid opcode: 0000 [#1] SMP PTI
> 
> With the following debug patch.  Based on that I think I'll just
> formally submit the vmalloc switch as we're at -rc5, and then we
> can restart the unaligned slub allocation drama..

This still doesn't make sense to me, because the pmem and brd code
have no alignment limitations in their make_request code - they can
handle byte addressing and should not have any problem at all with
8 byte aligned memory in bios.
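
For reference, the pmem write path boils down to something like this
(paraphrased from drivers/nvdimm/pmem.c from memory, so treat it as a
sketch rather than the literal code):

static blk_status_t pmem_do_bvec(struct pmem_device *pmem, struct page *page,
			unsigned int len, unsigned int off, unsigned int op,
			sector_t sector)
{
	/* device-side address is computed purely from the sector number */
	phys_addr_t pmem_off = sector * 512 + pmem->data_offset;
	void *pmem_addr = pmem->virt_addr + pmem_off;

	/*
	 * memory side is just page + off (ignoring highmem here) -
	 * there is no alignment requirement on that side of the copy
	 */
	if (op_is_write(op))
		memcpy_flushcache(pmem_addr, page_address(page) + off, len);
	else
		memcpy(page_address(page) + off, pmem_addr, len);
	return BLK_STS_OK;
}

i.e. the memory being copied to/from really is byte addressed - only
the device side cares about sectors.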

Digging a little further, I note that both brd and pmem use
identical mechanisms to marshal data in and out of bios, so they
are likely to have the same issue.

So, brd_make_request() does:

        bio_for_each_segment(bvec, bio, iter) {
                unsigned int len = bvec.bv_len;
                int err;

                err = brd_do_bvec(brd, bvec.bv_page, len, bvec.bv_offset,
                                  bio_op(bio), sector);
                if (err)
                        goto io_error;
                sector += len >> SECTOR_SHIFT;
        }

So, the code behind bio_for_each_segment() splits multi-page bvecs
into individual pages, which are passed to brd_do_bvec(). An
unaligned 4kB IO traces out as:

 [  121.295550] p,o,l,s 00000000a77f0146,768,3328,0x7d0048
 [  121.297635] p,o,l,s 000000006ceca91e,0,768,0x7d004e

i.e. page		offset	len	sector
00000000a77f0146	768	3328	0x7d0048
000000006ceca91e	0	768	0x7d004e

You should be able to guess what the problems are from this.

Both pmem and brd are _sector_ based. We did a partial sector copy
for the first bvec, and the second bvec then starts its copy at the
wrong offset into the sector we'd only partially copied.
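
To spell out why: both drivers derive the in-device offset purely
from the sector number and track their position in whole sectors.
The brd read side is roughly this (simplified from
drivers/block/brd.c - a sketch, not the literal code):

static void copy_from_brd(void *dst, struct brd_device *brd,
			  sector_t sector, size_t n)
{
	/* source offset comes from the sector number alone */
	unsigned int offset = (sector & (PAGE_SECTORS - 1)) << SECTOR_SHIFT;
	struct page *page = brd_lookup_page(brd, sector);
	void *src = kmap_atomic(page);

	/* n is not required to be a multiple of the sector size */
	memcpy(dst, src + offset, n);
	kunmap_atomic(src);
}

and the loop in brd_make_request() advances the position with
"sector += len >> SECTOR_SHIFT", so the trailing half sector of the
3328 byte copy above is dropped from the running position and the
second bvec starts again at offset 0 of sector 0x7d004e instead of
picking up where the first copy left off.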

IOWs, no error is reported when the bvec buffer isn't sector
aligned, no error is reported when the length of data to copy isn't
a multiple of the sector size, and no error is reported when we copy
the same partial sector twice.

There's nothing quite like being repeatedly bitten by the same
misalignment bug because there's no validation in the infrastructure
that could catch it immediately and throw a useful warning/error
message.
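
Even something as dumb as this in the driver's bvec walk (purely
illustrative - I'm not proposing it as the actual fix) would have
turned silent corruption into an immediate, obvious warning:

/*
 * Hypothetical check - not in any tree.  A sector-based copy scheme
 * like brd/pmem's implicitly requires every segment to be sector
 * aligned in both offset and length; warn the moment that isn't true.
 */
static bool bio_segments_sector_aligned(struct bio *bio)
{
	struct bio_vec bvec;
	struct bvec_iter iter;

	bio_for_each_segment(bvec, bio, iter) {
		if ((bvec.bv_offset | bvec.bv_len) & (SECTOR_SIZE - 1)) {
			WARN_ONCE(1, "unaligned bvec: offset %u len %u\n",
				  bvec.bv_offset, bvec.bv_len);
			return false;
		}
	}
	return true;
}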

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


