Re: [PATCH RFC 2/4] xfs: defer agfl block frees when dfops is available

On Mon, Jan 08, 2018 at 04:56:03PM -0500, Brian Foster wrote:
> cc Darrick
> 
> On Fri, Dec 08, 2017 at 09:16:30AM -0500, Brian Foster wrote:
> > On Fri, Dec 08, 2017 at 09:41:26AM +1100, Dave Chinner wrote:
> > > On Thu, Dec 07, 2017 at 01:58:08PM -0500, Brian Foster wrote:
> ...
> > > 
> > > Ok, so it uses the EFI/EFD to make sure that the block freeing is
> > > logged and replayed. So my question is:
> > > 
> > > > +/*
> > > > + * AGFL blocks are accounted differently in the reserve pools and are not
> > > > + * inserted into the busy extent list.
> > > > + */
> > > > +STATIC int
> > > > +xfs_agfl_free_finish_item(
> > > > +	struct xfs_trans		*tp,
> > > > +	struct xfs_defer_ops		*dop,
> > > > +	struct list_head		*item,
> > > > +	void				*done_item,
> > > > +	void				**state)
> > > > +{
> > > 
> > > How does this function get called by log recovery when processing
> > > the EFI as there is no flag in the EFI that says this was a AGFL
> > > block?
> > > 
> > 
> > It doesn't...
> > 
> > > That said, I haven't traced through whether this matters or not,
> > > but I suspect it does because freelist frees use XFS_AG_RESV_AGFL
> > > and that avoids accounting the free to the superblock counters
> > > because the block is already accounted as free space....
> > > 
> > 
> > I don't think it does matter. I actually tested log recovery precisely
> > for this question, to see whether the traditional EFI recovery path
> > would disrupt accounting or anything and I didn't reproduce any problems
> > (well, except for that rmap record cleanup failure thing).
> > 
> > However, I do still need to trace through and understand why that is, to
> > know for sure that there aren't any problems lurking here (and if not, I
> > should probably document it), but I suspect the reason is that the
> > differences between how agfl and regular blocks are handled here only
> > affect in-core state of the AG reservation pools. These are all
> > reinitialized from zero on a subsequent mount based on the on-disk state
> > (... but good point, and I will try to confirm that before posting a
> > non-RFC variant).
> > 
> 
> After catching back up with this and taking a closer look at the code, I
> can confirm that generic EFI recovery works fine for deferred AGFL block
> frees. What happens is essentially that the slot is freed and the block
> free is deferred in a particular tx. If we crash before that tx commits,
> then obviously nothing changes and we're fine. If we crash after that tx
> commits, EFI recovery frees the block and the AGFL reserve pool
> adjustment is irrelevant as the in-core res counters are initialized
> from the current state of the fs after log recovery has completed (so
> even if we knew this was an agfl block, attempting reservation
> adjustments at recovery time would probably be wrong).
> 
> That aside, looking through the perag res code had me a bit curious
> about why we reserve all agfl blocks in the first place. IIUC, the AGFL
> reserve pool actually serves the rmapbt, since that (and that alone) is
> what the mount time reservation is based on. AGFL blocks can be used for
> other purposes, however, and the current runtime reservation is adjusted
> based on all AGFL activity. Is there a reason this reserve pool does not
> specifically target rmapbt allocations? By not doing so, don't we allow
> some percentage of the rmapbt reserved blocks to be consumed by other
> structures (alloc btrees) until/unless the fs is remounted? I'm
> wondering if specifically there's a risk of something like the
> following:
> 
> - mount fs with some number N of AGFL reserved blocks based on current
>   rmapbt state. Suppose the size of the rmapbt is R.
> - A bunch of agfl blocks are used over time (U). Suppose 50% of those go
>   to the rmapbt and the other 50% to alloc btrees and whatnot.
>   ar_reserved is reduced by U, but R only increases by U/2.
> - a bunch more unrelated physical allocations occur and consume all
>   non-reserved space
> - the fs unmounts/mounts and the perag code looks for the remaining U/2
>   blocks to reserve for the rmapbt, but not all of those blocks are
>   available because we depleted the reserved pool faster than the rmapbt
>   grew.
> 
> Darrick, hm? FWIW, a quick test to allocate 100% of an AG to a file,
> punch out every other block for a few minutes and then remount kind of
> shows what I'm wondering about:
> 
>  mount-3215  [002] ...1  1642.401846: xfs_ag_resv_init: dev 253:3 agno 0 resv 2 freeblks 259564 flcount 6 resv 3193 ask 3194 len 3194
> ...
>  <...>-28260 [000] ...1  1974.946866: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36317 flcount 9 resv 2936 ask 3194 len 1
>  <...>-28428 [002] ...1  1976.371830: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36483 flcount 9 resv 2935 ask 3194 len 1
>  <...>-28490 [002] ...1  1976.898147: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36544 flcount 9 resv 2934 ask 3194 len 1
>  <...>-28491 [002] ...1  1976.907967: xfs_ag_resv_alloc_extent: dev 253:3 agno 0 resv 2 freeblks 36544 flcount 9 resv 2933 ask 3194 len 1
> umount-28664 [002] ...1  1983.335444: xfs_ag_resv_free: dev 253:3 agno 0 resv 2 freeblks 36643 flcount 10 resv 2932 ask 3194 len 0
>  mount-28671 [000] ...1  1991.396640: xfs_ag_resv_init: dev 253:3 agno 0 resv 2 freeblks 36643 flcount 10 resv 3038 ask 3194 len 3194

Yep, that's a bug.  Our current agfl doesn't have much of a way to
signal to the perag reservation code that it's using blocks for the
rmapbt vs. any other use, so we use up the reservation and then pull
from the non-reserved free space after that, with the result that
sometimes we can blow the assert in xfs_ag_resv_init.  I've not been
able to come up with a convincing way to fix this problem, largely
because of:

> We consume the res as we go, unmount with some held reservation value,
> immediately remount and the associated reservation has jumped by 100
> blocks or so. (Granted, whether this can manifest as a tangible
> problem may be another story altogether.)

It's theoretically possible -- even with the perag reservation
functioning perfectly we still can run the ag totally out of blocks if
the rmap expands beyond our assumed max rmapbt size of max(1 record per
block, 1% of the ag).

Say you have agblocks = 11000, allocate 50% of the AG to a file, then
reflink that extent into 10000 other files.  Then dirty every other
block in each of the 10000 reflink copies.  Even if all the dirtied
blocks end up in some other AG, we've still expanded the number of rmap
records from 10001 to 5000 * 10000 == 50,000,000, which is much bigger
than our original estimate.

Unfortunately the options here aren't good -- we can't reserve enough
blocks to cover the maximal rmapbt when reflink is enabled, so the best
we could do is fail write_begin with ENOSPC if the shared extent's AG is
critically low on reservation, but that kinda sucks because at that
point the CoW /should/ be moving blocks (and therefore mappings) away
from the full AG.

(We already cut off reflinking when the perag reservations are
critically low, but that's only because we assume the caller will fall
back to a physical copy and that the physical copy will land in some
other AG.)

--D

> 
> Brian
> 
> > Brian
> > 
> > > Cheers,
> > > 
> > > Dave.
> > > -- 
> > > Dave Chinner
> > > david@xxxxxxxxxxxxx
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe linux-xfs" in
> > > the body of a message to majordomo@xxxxxxxxxxxxxxx
> > > More majordomo info at  http://vger.kernel.org/majordomo-info.html