On Thu, Jul 02, 2020 at 08:18:39AM +1000, Dave Chinner wrote: > On Wed, Jul 01, 2020 at 10:31:21AM -0400, Brian Foster wrote: > > On Tue, Jun 23, 2020 at 07:50:14PM +1000, Dave Chinner wrote: > > > From: Dave Chinner <dchinner@xxxxxxxxxx> > > > > > > Currently we cache unlinked inode list backrefs through a separate > > > cache which has to be maintained via memory allocation and a hash > > > table. When the inode is on the unlinked list, we have an existence > > > guarantee for the inode in memory. > > > > > > That is, if the inode is on the unlinked list, there must be a > > > reference to the inode from the core VFS because dropping the last > > > reference to the inode causes it to be removed from the unlinked > > > list. Hence if we hold the AGI locked, we guarantee that any inode > > > on the unlinked list is pinned in memory. That means we can actually > > > track the entire unlinked list on the inode itself and use > > > unreferenced inode cache lookups to update the list pointers as > > > needed. > > > > > > However, we don't use this relationship because log recovery has > > > no in memory state and so has to work directly from buffers. > > > However, because unlink recovery only removes from the head of the > > > list, we can easily fake this in memory state as the inode we read > > > in from the AGI bucket has a pointer to the next inode. Hence we can > > > play reference leapfrog in the recovery loop always reading the > > > second inode on the list and updating pointers before dropping the > > > reference to the first inode. Hence the in-memory state is always > > > valid for recovery, too. > > > > > > This means we can tear out the old inode unlinked list cache and > > > update mechanisms and replace it with a much simpler "insert" and > > > "remove" functions that use in-memory inode state rather than on > > > buffer state to track the list. The diffstat speaks for itself. > > > > > > Food for thought: This obliviates the need for the on-disk AGI > > > unlinked hash - we because we track as a double linked list in > > > memory, we don't need to keep hash chains on disk short to minimise > > > previous inode lookup overhead on list removal. Hence we probably > > > should just convert the unlinked list code to use a single list > > > on disk... > > > > > > Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx> > > > --- > > > > Looks interesting, but are you planning to break this up into smaller > > pieces? E.g., perhaps add the new inode pointers and set them in one > > patch, then replace the whole backref thing with the in-core pointers, > > then update the insert/remove, then log recovery, etc. > > Likely, yes. > > > I'm sure there's > > various ways it can or cannot be split, but regardless this patch looks > > like it could be a series in and of itself. > > This RFC series is largely centered around this single patch, so > splitting it out into a separate series makes no sense. > I was just speaking generally that this patch looked quite overloaded. I don't mean to imply it should be separated from the others. > FWIW, This is basically the same sort of thing that the inode > flushing patchset started out as - a single patch that I wrote in > few hours and got working as a whole. It does need to be split up, > but given that the inode flushing rework took several months to turn > a few hours of coding into a mergable patchset, I haven't bothered > to do that for this patch set yet. > Understood. > I'd kinda like to avoid having this explode into 30 patches as that > previous patchset did - this is a very self-contained change, so > there's really only 4-5 pieces it can be split up into. Trying to > split it more finely than that is going to make it quite hard to > find clean places to split it... > I'm not expecting 30 patches. :) The quick flow I noted above, perhaps with the addition of pushing refactoring changes towards the end, lands right around 4 or 5 and seems like it would improve reviewability quite a bit. Of course things change that might lead to more or less once you get into the details/context of breaking things down... Brian > Cheers, > > Dave. > -- > Dave Chinner > david@xxxxxxxxxxxxx >