On Mon, Sep 16, 2019 at 09:20:05AM -0700, Darrick J. Wong wrote:
> On Wed, Sep 11, 2019 at 11:21:07AM +1000, Dave Chinner wrote:
> > From: Dave Chinner <dchinner@xxxxxxxxxx>
> >
> > And that's the behaviour I just saw in a nutshell. The on disk count
> > is correct, but once the tree is loaded into memory, it goes whacky.
> > Clearly there's something wrong with xfs_iext_count():
> >
> > inline xfs_extnum_t xfs_iext_count(struct xfs_ifork *ifp)
> > {
> > 	return ifp->if_bytes / sizeof(struct xfs_iext_rec);
> > }
> >
> > Simple enough, but 134M extents is 2**27, and that's right about
>
> On the plus side, 2^27 is way better than the last time anyone tried to
> create an egregious number of extents.

Well, we'd get to 2^26 (~65M extents) before memory allocation stopped
progress...

> > Current testing is at over 500M extents and still going:
> >
> > 	fsxattr.nextents = 517310478
> >
> > Reported-by: Zorro Lang <zlang@xxxxxxxxxx>
> > Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
>
> Looks reasonable to me; did Zorro retest w/ this patch?

No idea, but I got to 1.3B extents before the VM ran out of RAM and
oom-killed itself to death - the extent list took up >47GB of the 48GB
of RAM I gave the VM.

At some point we are going to have to think about demand paging extent
lists....

> If so,
> Reviewed-by: Darrick J. Wong <darrick.wong@xxxxxxxxxx>

Thanks!

-Dave.

-- 
Dave Chinner
david@xxxxxxxxxxxxx
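
For readers following along, the arithmetic behind the sentence the quote
trims at "that's right about": 2**27 extent records of 16 bytes each is
2**31 bytes, exactly where a signed 32-bit byte counter wraps negative. A
minimal userspace sketch of that failure mode, assuming if_bytes was a
signed 32-bit int at the time and that struct xfs_iext_rec is 16 bytes;
the iext_rec stand-in below is hypothetical, not the kernel's own code:

#include <stdio.h>
#include <stdint.h>

/* 16-byte record, stand-in for struct xfs_iext_rec (two __u64 words) */
struct iext_rec { uint64_t lo, hi; };

int main(void)
{
	int32_t if_bytes;		/* signed 32-bit byte count */
	int64_t nextents = 134217728;	/* 2**27 extents, ~134M */

	/*
	 * 2**27 records * 16 bytes = 2**31 bytes, one past INT32_MAX.
	 * Shown with an explicit truncating cast; in the kernel the
	 * counter would simply have been incremented past the limit.
	 */
	if_bytes = (int32_t)(nextents * (int64_t)sizeof(struct iext_rec));
	printf("if_bytes = %d\n", if_bytes);	/* -2147483648 */

	/*
	 * Dividing by sizeof() converts the negative int to a huge
	 * unsigned size_t first, so the derived extent count is
	 * garbage rather than the 2**27 records actually present.
	 */
	size_t count = if_bytes / sizeof(struct iext_rec);
	printf("count = %zu\n", count);

	return 0;
}

On an LP64 box this prints a count near 2**60 instead of 134217728, which
matches the "whacky" in-memory count described in the patch text above.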