Re: [PATCH v2] xfs: byte range buffer dirty region tracking

On Tue, Feb 13, 2018 at 08:15:26AM -0500, Brian Foster wrote:
> On Tue, Feb 13, 2018 at 08:18:24AM +1100, Dave Chinner wrote:
> > On Mon, Feb 12, 2018 at 09:26:19AM -0500, Brian Foster wrote:
> > > :/ So it seems to
> > > me this breaks a technically valid case in weird/subtle ways. For
> > > example, why assert about last == 0, but then go on to add the range
> > > anyways, explicitly not size it correctly, but then format it as if
> > > nothing is wrong? If it were really wrong/invalid (which I don't think
> > > it is), why not put the check in the log side and skip adding the range
> > > rather than add it, skip sizing it, and then format it.
> > 
> > So what you're really concerned about is that I put asserts into the
> > code to catch broken development code, but then allow production
> > systems through without caring whether it works correctly because
> > that boundary condition will never occur during runtime on
> > production systems?
> 
> No. As already mentioned in my previous mail, I care little about the
> asserts. Asserts can easily be removed if they turn out to be bogus.
> Wrong asserts tend to have little negative effect on production users
> because along with only affecting debug kernels, they'd have to be
> fairly rare to slip through our testing. So I'm perfectly _happy_ to be
> cautious with regard to asserts.
> 
> What I care much more about is not leaving latent bugs around in the
> code. IMO, there is very rarely good enough justification to knowingly
> commit buggy/fragile code to the kernel,

Hold on a minute!

I'm not asking anyone to commit buggy or fragile code. I've already
fixed the off-by-one problems you've pointed out, and all I was
trying to do was understand what you saw wrong with the asserts to
catch a "should never happen" condition so I could change it in a
way that you'd find acceptable.

There's no need to shout and rant at me....

> ... having said all that and having already wasted more time on this
> than it would have taken for you to just fix the patch, I'll end my rant
> with this splat[1]. It demonstrates the "boundary condition" that "will
> never occur during runtime on production systems" (production system
> level output included for extra fun ;P).

This is a pre-existing bug in xlog_cil_insert_format_items()
that my change has exposed:

                /* Skip items that do not have any vectors for writing */
                if (!shadow->lv_niovecs && !ordered)
                        continue;

The code I added triggers this (niovecs == 0), and that now gives
us the case where we have a dirty log item descriptor
(XFS_LID_DIRTY) without a log vector attached to item->li_lv.
Then in xlog_cil_insert_items():

                /* Skip items which aren't dirty in this transaction. */
                if (!(lidp->lid_flags & XFS_LID_DIRTY))
                        continue;

                /*
                 * Only move the item if it isn't already at the tail. This is
                 * to prevent a transient list_empty() state when reinserting
                 * an item that is already the only item in the CIL.
                 */
                if (!list_is_last(&lip->li_cil, &cil->xc_cil))
                        list_move_tail(&lip->li_cil, &cil->xc_cil);


We put that "clean" log item on the CIL because XFS_LID_DIRTY is
set, and then when we push the CIL in xlog_cil_push(), we trip over
a dirty log item without a log vector when chaining log vectors to
pass to the log writing code here:

        while (!list_empty(&cil->xc_cil)) {
                struct xfs_log_item     *item;

                item = list_first_entry(&cil->xc_cil,
                                        struct xfs_log_item, li_cil);
                list_del_init(&item->li_cil);
                if (!ctx->lv_chain)
                        ctx->lv_chain = item->li_lv;
                else
                        lv->lv_next = item->li_lv;       <<<<<<<<<
 >>>>>>>>       lv = item->li_lv;
                item->li_lv = NULL;
                num_iovecs += lv->lv_niovecs;
        }

i.e. when we hit the dirty item with no log vector, lv is set to
NULL part way through the log item chain we are processing, and the
next loop iteration dereferences it via lv->lv_next and crashes.

IOWs, the bug isn't in the patch I wrote - it has uncovered a
latent bug added years ago for a condition that had never, ever been
exercised until now.

Brian, can you now give me all the details of what you were doing to
produce this and turn on CONFIG_XFS_DEBUG so that it catches the
zero length buffer that was logged when it happens?  That way I can
test a fix for this bug and that the buffer range logging exercises
this case properly...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
--
To unsubscribe from this list: send the line "unsubscribe linux-xfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
