Re: [Syzkaller & bisect] There is BUG: unable to handle kernel NULL pointer dereference in xfs_extent_free_diff_items in v6.4-rc3

On Tue, May 23, 2023 at 12:00:29AM +0000, Eric Biggers wrote:
> On Mon, May 22, 2023 at 09:05:25AM -0700, Darrick J. Wong wrote:
> > On Mon, May 22, 2023 at 01:39:27PM +0700, Bagas Sanjaya wrote:
> > > On Mon, May 22, 2023 at 10:07:28AM +0800, Pengfei Xu wrote:
> > > > Hi Darrick,
> > > > 
> > > > Greeting!
> > > > There is BUG: unable to handle kernel NULL pointer dereference in
> > > > xfs_extent_free_diff_items in v6.4-rc3:
> > > > 
> > > > Above issue could be reproduced in v6.4-rc3 and v6.4-rc2 kernel in guest.
> > > > 
> > > > Bisected this issue between v6.4-rc2 and v5.11, found the problem commit is:
> > > > "
> > > > f6b384631e1e xfs: give xfs_extfree_intent its own perag reference
> > > > "
> > > > 
> > > > report0, repro.stat and so on detailed info is link: https://github.com/xupengfe/syzkaller_logs/tree/main/230521_043336_xfs_extent_free_diff_items
> > > > Syzkaller reproduced code: https://github.com/xupengfe/syzkaller_logs/blob/main/230521_043336_xfs_extent_free_diff_items/repro.c
> > > > Syzkaller reproduced prog: https://github.com/xupengfe/syzkaller_logs/blob/main/230521_043336_xfs_extent_free_diff_items/repro.prog
> > > > Kconfig: https://github.com/xupengfe/syzkaller_logs/blob/main/230521_043336_xfs_extent_free_diff_items/kconfig_origin
> > > > Bisect info: https://github.com/xupengfe/syzkaller_logs/blob/main/230521_043336_xfs_extent_free_diff_items/bisect_info.log
> > > > Issue dmesg: https://github.com/xupengfe/syzkaller_logs/blob/main/230521_043336_xfs_extent_free_diff_items/v6.4-rc3_reproduce_dmesg.log
> > > > 
> > > > v6.4-rc3 reproduced info:
> > 
> > Diagnosis and patches welcomed.
> > 
> > Or are we doing the usual syzbot bullshit where you all assume that I'm
> > going to do all the fucking work for you?
> > 
> 
> It looks like Pengfei already took the time to manually bisect this issue to a
> very recent commit authored by you.  Is that not helpful?

No. The bisect is completely meaningless.

The cause of the problem is going to be some piece of corrupted
metadata that has got through a verifier check or log recovery and
has resulted in a perag lookup failing. The bisect landed on the
commit where the perag dependency was introduced; whatever is
letting unchecked corrupted metadata through the verifiers has
existed long before this recent change was made.
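
To make that chain of events concrete, here's a rough userspace toy
model of the failure mode (my own sketch with made-up names like
perag_get() and extfree_item, not the kernel code): an AG number
taken from unchecked metadata fails the lookup, and the resulting
NULL pointer is only dereferenced much later when the deferred work
item is processed, nowhere near the corruption:

/*
 * Toy userspace model of the failure mode; the names here are
 * placeholders, not the real kernel structures or functions.
 */
#include <stdio.h>

#define AG_COUNT 4

struct perag { unsigned int agno; };

static struct perag ag_table[AG_COUNT];

/* Stand-in for a perag lookup: NULL for an out-of-range AG number. */
static struct perag *perag_get(unsigned int agno)
{
	if (agno >= AG_COUNT)
		return NULL;
	ag_table[agno].agno = agno;
	return &ag_table[agno];
}

/* Deferred work item carrying a perag reference, like an xefi does. */
struct extfree_item { struct perag *pag; };

int main(void)
{
	/* AG number "read from corrupt, unverified metadata". */
	struct extfree_item xefi = { .pag = perag_get(0xdeadbeef) };

	/* ... the transaction commits, time passes ... */

	/* The splat happens here, far away from the bad metadata. */
	printf("agno = %u\n", xefi.pag->agno);
	return 0;
}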

I've already spent two hours analysing this report - I've got to the
point where I've isolated the transaction in the trace, I see the
allocation being run as expected, I see all the right things
happening, and then it goes splat after the allocation has committed
and it starts processing deferred extent free operations. Neither the
code nor the trace actually tell me anything about the nature of the
failure that has occurred.

At this point, I still don't know where the corrupted metadata is
coming from. That's the next thing I need to look at, and then I
realised that this bug report *doesn't include a pointer to the
corrupted filesystem image that is being mounted*.

IOWs, the bug report is deficient and incomplete, and so I'm
forced to spend unnecessary time trying to work out how to extract
the filesystem image from a weird syzkaller report that is basically
just a bunch of undocumented blobs in a github tree.

This is the same sort of shit we've been having to deal with right
from the start with syzkaller. It doesn't matter that syzbot might
have improved its reporting a bit these days, we still have to deal with
this sort of poor reporting from all the private syzkaller bot crank
handles that are being turned by people who know little more than
how to turn a crank handle.

To make matters worse, this is a v4 filesystem which has known
unfixable issues when handling corrupted filesystems in both log
replay and in runtime detection of corruption. We've repeatedly told
people running syzkaller (including Pengfei) to stop running it on
v4 filesystems and only report bugs on V5 format filesystems. This
is to avoid wasting time triaging these problems back down to
v4-specific format bugs that can only be fixed by moving to the v5
format.

.....

And now after 4 hours, I have found several corruptions in the
on-disk format that v5 filesystems would have caught and v4
filesystems will not.

The AGFL indexes in the AGF have been corrupted. They are within
valid bounds, but first, last and count don't agree with each other.
On a V5 filesystem we catch this and trigger an AGFL reset that is
done on the first allocation. v4 filesystems do not do this
first/last/count cross-check at all.
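
To illustrate what I mean, here's a standalone sketch of that kind
of first/last/count cross-check (not the actual verifier code; the
number of AGFL slots and the helper name are made up), including the
case where the free list wraps around the end of the AGFL:

#include <stdbool.h>
#include <stdio.h>

#define AGFL_SLOTS 118		/* made-up AGFL size for illustration */

/*
 * Check that the first/last indexes and the count agree with each
 * other, allowing for the active range wrapping around the end of
 * the AGFL.
 */
static bool agfl_indexes_consistent(unsigned int first, unsigned int last,
				    unsigned int count)
{
	unsigned int active;

	if (first >= AGFL_SLOTS || last >= AGFL_SLOTS || count > AGFL_SLOTS)
		return false;		/* outside valid bounds */
	if (count == 0)
		return true;		/* empty list: indexes don't matter */

	if (last >= first)
		active = last - first + 1;
	else
		active = AGFL_SLOTS - first + last + 1;

	return active == count;
}

int main(void)
{
	/* In bounds, but the indexes disagree with the count: corrupt. */
	printf("%d\n", agfl_indexes_consistent(10, 20, 5));	/* 0 */
	/* Indexes 10..20 cover 11 entries: consistent. */
	printf("%d\n", agfl_indexes_consistent(10, 20, 11));	/* 1 */
	return 0;
}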

Further, the AGFL has also been corrupted - it is full of null
blocks. This is another problem that V5 filesystems can catch and
report, but v4 filesystems don't because they don't have headers in
the AGFL that enable verification.
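
For anyone wondering what that verification looks like, the V5 AGFL
block starts with a self-describing header along these lines (my
sketch of the layout; treat the field names and details as
approximate, and note that all fields are big-endian on disk),
whereas a v4 AGFL block is just a bare array of block numbers with
nothing to check:

#include <stdint.h>
#include <stdio.h>

/* Approximate sketch of the V5 AGFL block header. */
struct agfl_v5_header {
	uint32_t magicnum;	/* magic identifying this as an AGFL block */
	uint32_t seqno;		/* AG number the block belongs to */
	uint8_t  uuid[16];	/* filesystem UUID */
	uint64_t lsn;		/* last LSN that modified the block */
	uint32_t crc;		/* CRC covering the whole block */
} __attribute__((packed));

int main(void)
{
	/*
	 * A verifier can reject an AGFL block whose magic, owner AG,
	 * UUID or CRC doesn't match the filesystem it was read from.
	 */
	printf("header size: %zu bytes\n", sizeof(struct agfl_v5_header));
	return 0;
}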

Yes, there's definitely scope for further improvements in validation
here, but the unhandled corruptions that I've found still don't
explain how the xefi that is causing the crash ended up with a null
perag when it was created from a referenced perag.

So, yeah, the bisect is completely useless, and I've got half a day
into triage and I still don't have any clue what the corruption is
that is causing the kernel to crash....

----

Do you see the problem now, Eric?

Performing root-cause analysis of syzkaller-based malicious
filesystem corruption bugs is anything but simple. It takes hours to
days just to work through triage of a single bug report, and we're
getting a couple of these sorts of bugs reported every week.

People who do nothing but turn the bot crank handle and throw stuff
like this over the wall at us are easy to find. Bots and bot crank
turners scale really easily. Engineers who can find and fix the
problems, OTOH, don't.

And just to rub salt into the wounds, we now have people who turn
crank handles on other bots to tell everyone else how important
they think the problem is without having performed any triage at
all. And then we're expected to make an untriaged bug report our
highest priority and immediately spend hours of time to make sense
of the steaming pile that has just been dumped on us.

Worse, we've had people who track regressions imply that if we don't
prioritise fixing regressions ahead of anything else we might be
working on, then we might not get new work merged until the
regressions have been fixed. In my book, that's akin to extortion,
and it might give you some insight into why Darrick reacted so
vigorously to having an untriaged syzkaller bug tracked as a high
visibility, must fix regression.

What we really need is more people who are capable of triaging bug
reports like this instead of having lots of people cranking on bot
handles and dumping untriaged bug reports on the maintainer.
Further, if you aren't capable of triaging the bug report, then you
aren't qualified to classify it as a "must fix" regression.

It's like people don't have any common sense or decency anymore:
it's not very nice to classify a bug as a "must fix" regression
without first having consulted the engineers responsible for that
code. If you don't know what the cause of the bug is, then don't
crank handles that cause people to have to address it immediately!

If nothing changes, then the ever increasing amount of bot cranking
is going to burn us out completely. Nobody wins when that
happens....

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


