On 3/15/22 6:23 PM, Darrick J. Wong wrote:
> From: Darrick J. Wong <djwong@xxxxxxxxxx>
>
> Currently, we don't let an internal log consume every last block in an
> AG.  According to the comment, we're doing this to avoid tripping AGF
> verifiers if freeblks==0, but on a modern filesystem this isn't
> sufficient to avoid problems.  First, the per-AG reservations for
> reflink and rmap claim up to about 1.7% of each AG for btree expansion,

Hm, will that be a factor if the log consumes every last block in that
AG?  Or is the problem that if we consume "most" blocks, that leaves the
possibility of reflink/rmap btree expansion subsequently failing because
we do have a little room for new allocations in that AG?

Or is it a problem right out of the gate, because the per-AG
reservations collide with a maximal log before the filesystem is even
in use?

> and secondly, we need to have enough space in the AG to allocate the
> root inode chunk, if it should be the case that the log ends up in AG 0.
> We don't care about nonredundant (i.e. agcount==1) filesystems, but it
> can also happen if the user passes in -lagnum=0.
>
> Change this constraint so that we can't leave less than 5% free space
> after allocating the log.  This is perhaps a bit much, but as we're
> about to disallow tiny filesystems anyway, it seems unlikely to cause
> problems with scenarios that we care about.

This only modifies the case where we automatically calculated a log
size, and doesn't affect a manually-specified size.  Is that
intentional?

(I guess we already had this discrepancy, whether it was the old "-1"
heuristic or the new "95%" heuristic... but 5% is likely to be a fair
bit bigger than 1 block, so I'm wondering if the manually-specified
case needs to be limited as well.)

Thanks,
-Eric
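
For reference, a minimal sketch of the arithmetic the patch description
implies for the auto-sized case (cap the internal log so at least 5% of
the AG stays free).  This is not the actual xfsprogs code; the function
name, types, and rounding choice are illustrative assumptions only:

#include <stdint.h>

/*
 * Hypothetical helper: largest internal log size (in fs blocks) that
 * still leaves at least 5% of the AG free, i.e. a 95% cap.  Integer
 * division rounds the cap down, erring on the side of more free space.
 */
static inline uint64_t
max_autolog_blocks(
	uint64_t	agblocks)	/* size of the AG in fs blocks */
{
	return agblocks * 95 / 100;
}

A manually-specified log size, as noted above, would bypass any such
cap and only be checked against the old constraint.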