Re: [4.10, panic, regression] iscsi: null pointer deref at iscsi_tcp_segment_done+0x20d/0x2e0

----- Original Message -----
> From: "Laurence Oberman" <loberman@xxxxxxxxxx>
> To: "Jan Kara" <jack@xxxxxxx>
> Cc: "Johannes Weiner" <hannes@xxxxxxxxxxx>, "Hugh Dickins" <hughd@xxxxxxxxxx>, "Linus Torvalds"
> <torvalds@xxxxxxxxxxxxxxxxxxxx>, "Dave Chinner" <david@xxxxxxxxxxxxx>, "Chris Leech" <cleech@xxxxxxxxxx>, "Linux
> Kernel Mailing List" <linux-kernel@xxxxxxxxxxxxxxx>, "Lee Duncan" <lduncan@xxxxxxxx>, open-iscsi@xxxxxxxxxxxxxxxx,
> "Linux SCSI List" <linux-scsi@xxxxxxxxxxxxxxx>, linux-block@xxxxxxxxxxxxxxx, "Christoph Hellwig" <hch@xxxxxx>,
> "Andrea Arcangeli" <aarcange@xxxxxxxxxx>
> Sent: Wednesday, January 4, 2017 10:26:09 AM
> Subject: Re: [4.10, panic, regression] iscsi: null pointer deref at iscsi_tcp_segment_done+0x20d/0x2e0
> 
> 
> 
> ----- Original Message -----
> > From: "Jan Kara" <jack@xxxxxxx>
> > To: "Johannes Weiner" <hannes@xxxxxxxxxxx>
> > Cc: "Hugh Dickins" <hughd@xxxxxxxxxx>, "Linus Torvalds"
> > <torvalds@xxxxxxxxxxxxxxxxxxxx>, "Dave Chinner"
> > <david@xxxxxxxxxxxxx>, "Chris Leech" <cleech@xxxxxxxxxx>, "Linux Kernel
> > Mailing List"
> > <linux-kernel@xxxxxxxxxxxxxxx>, "Lee Duncan" <lduncan@xxxxxxxx>,
> > open-iscsi@xxxxxxxxxxxxxxxx, "Linux SCSI List"
> > <linux-scsi@xxxxxxxxxxxxxxx>, linux-block@xxxxxxxxxxxxxxx, "Christoph
> > Hellwig" <hch@xxxxxx>, "Jan Kara"
> > <jack@xxxxxxx>, "Andrea Arcangeli" <aarcange@xxxxxxxxxx>
> > Sent: Tuesday, January 3, 2017 7:28:25 AM
> > Subject: Re: [4.10, panic, regression] iscsi: null pointer deref at
> > iscsi_tcp_segment_done+0x20d/0x2e0
> > 
> > On Mon 02-01-17 16:11:36, Johannes Weiner wrote:
> > > On Fri, Dec 23, 2016 at 03:33:29AM -0500, Johannes Weiner wrote:
> > > > On Fri, Dec 23, 2016 at 02:32:41AM -0500, Johannes Weiner wrote:
> > > > > On Thu, Dec 22, 2016 at 12:22:27PM -0800, Hugh Dickins wrote:
> > > > > > On Wed, 21 Dec 2016, Linus Torvalds wrote:
> > > > > > > On Wed, Dec 21, 2016 at 9:13 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > > > > > > > I unmounted the fs, mkfs'd it again, ran the
> > > > > > > > workload again and about a minute in this fired:
> > > > > > > >
> > > > > > > > [628867.607417] ------------[ cut here ]------------
> > > > > > > > [628867.608603] WARNING: CPU: 2 PID: 16925 at mm/workingset.c:461 shadow_lru_isolate+0x171/0x220
> > > > > > > 
> > > > > > > Well, part of the changes during the merge window were the shadow
> > > > > > > entry tracking changes that came in through Andrew's tree. Adding
> > > > > > > Johannes Weiner to the participants.
> > > > > > > 
> > > > > > > > Now, this workload does not touch the page cache at all - it's
> > > > > > > > entirely an XFS metadata workload, so it should not really be
> > > > > > > > affecting the working set code.
> > > > > > > 
> > > > > > > Well, I suspect that anything that creates memory pressure
> > > > > > > will end up triggering the working set code, so ..
> > > > > > > 
> > > > > > > That said, obviously memory corruption could be involved and
> > > > > > > result in random issues too, but I wouldn't really expect that
> > > > > > > in this code.
> > > > > > > 
> > > > > > > It would probably be really useful to get more data points -
> > > > > > > is the problem reliably in this area, or is it going to be
> > > > > > > random and all over the place.
> > > > > > 
> > > > > > Data point: kswapd got WARNING on mm/workingset.c:457 in
> > > > > > shadow_lru_isolate, soon followed by NULL pointer deref in
> > > > > > list_lru_isolate, one time when I tried out Sunday's git tree.
> > > > > > Not seen since, I haven't had time to investigate, just set it
> > > > > > aside as something to worry about if it happens again.  But it
> > > > > > looks like shadow_lru_isolate() has issues beyond Dave's case
> > > > > > (I've no XFS and no iscsi), suspect unrelated to his other
> > > > > > problems.
> > > > > 
> > > > > This seems consistent with what Dave observed: we encounter regular
> > > > > pages in radix tree nodes on the shadow LRU that should only contain
> > > > > nodes full of exceptional shadow entries. It could be an issue in the
> > > > > new slot replacement code and the node tracking callback.
> > > > 
> > > > Both encounters seem to indicate use-after-free. Dave's node didn't
> > > > warn about an unexpected node->count / node->exceptional state, but
> > > > had entries that were inconsistent with that. Hugh got the counter
> > > > warning but crashed on a list_head that's not NULLed in a live node.
> > > > 
> > > > workingset_update_node() should be called on page cache radix tree
> > > > leaf nodes that go empty. I must be missing an update_node callback
> > > > where a leaf node gets freed somewhere.
> > > 
> > > Sorry for going silent on this. I'm traveling over the holidays
> > > with sporadic access to my emails and no access to real equipment.
> > > 
> > > The times I managed to sneak away to look at the code didn't turn up
> > > anything useful yet.
> > > 
> > > Andrea encountered the warning as well and I gave him a debugging
> > > patch (attached below), but he hasn't been able to reproduce this
> > > condition. I've personally never seen the warning trigger, even though
> > > the patches have been running on my main development machine for quite
> > > a while now. Albeit against an older base; I've updated to Linus's
> > > master branch now in case it's an interaction with other new code.
> > > 
> > > If anybody manages to reproduce this, that would be helpful. Any extra
> > > eyes on this would be much appreciated too until I'm back at my desk.
> > 
> > I was looking into this but I couldn't find a way we could possibly
> > leave a radix tree node on the LRU. So your debug patch looks like a
> > good way forward.
> > 
> > 								Honza
> > --
> > Jan Kara <jack@xxxxxxxx>
> > SUSE Labs, CR
> 
> Running on 4.10.0-rc2+, days of iSCSI-served XFS load have not reproduced
> this for me yet on a 4-core (8 threads with hyper-threading) system, using
> iSCSI over an Intel Ethernet Connection I217-LM controller.
> 
> I will now try with a full-offload iSCSI HBA and see if I have any luck.

I will leave this running now against a full-offload connection.
If it triggers, I will apply the debug patch, reproduce it again, and share
the results with the list here.

Kernel 4.10.0-rc2+ on an x86_64
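
The LUN is attached with open-iscsi in the usual way before the loop starts;
the portal address and IQN below are placeholders for my lab setup:

# Discover the target and log in (portal and IQN are placeholders).
iscsiadm -m discovery -t sendtargets -p 192.168.1.100
iscsiadm -m node -T iqn.2016-12.com.example:target1 -p 192.168.1.100 --login

The reproducer loop itself: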

#!/bin/bash
# Reproducer: repeatedly mkfs, mount, populate, and unmount an XFS
# filesystem on the iSCSI-attached device (/dev/sdav on this setup).
while true
do
	mkfs -t xfs -f /dev/sdav
	mount /dev/sdav /mnt
	cp -Rp /usr /mnt
	umount /mnt
	sleep 10
done
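
Alongside the loop I follow the kernel log with something like this, so the
warning or the oops is caught the moment either fires:

# Follow the kernel ring buffer and flag the known signatures.
dmesg -w | grep -E 'workingset|shadow_lru_isolate|iscsi_tcp_segment_done'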
