On Tue, Jan 24, 2017 at 11:19:18AM -0500, Brian Foster wrote:
> On Tue, Jan 24, 2017 at 03:49:37PM +0100, Christoph Hellwig wrote:
> > On Tue, Jan 24, 2017 at 09:06:49AM -0500, Brian Foster wrote:
> > > Darrick called out in the previous version that this requires traversal
> > > of the entire tree at mount time. Do you have any test results on what
> > > kind of worst case mount delays we could be looking at here?
> >
> > Even with pretty horribly fragmented file systems I've not seen
> > major delays. But I don't have a setup with a lot of actual disks
> > but mostly SSDs these days, so this might not be statistically significant.
>
> Heh, I might have some systems with slow storage around. ;P It may take
> a little time to populate a large enough fs with inodes though..

<anecdote>

So on this laptop, we have:

$ df -i /storage/; df /storage/
Filesystem                     Inodes IUsed IFree IUse% Mounted on
/dev/mapper/birch_disk-storage   466M  1.8M  464M    1% /storage
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/birch_disk-storage 931G  525G  407G  57% /storage

$ sudo xfs_io -c 'fsmap -v -n 1024' . | grep 'inode btree' | \
	awk '{moo[$6] += $8}END{for (x=0;x<=255;x++) if (x in moo) print x, moo[x]}'
0 144
1 160
2 152
3 136
4 152
5 152
6 168
7 152

So on average we have ~160 sectors (or about 20 blocks) of inobt/finobt
in each of 8 AGs.

</anecdote>

--D

> Brian

--
To unsubscribe from this list: send the line "unsubscribe linux-xfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
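
[A quick sanity check of the per-AG numbers above, as a standalone snippet.
The sector counts are copied from the awk output; 512-byte sectors and
4 KiB filesystem blocks are assumed here, since the mkfs parameters for
this filesystem aren't shown in the thread.]

```shell
#!/bin/sh
# Per-AG inobt/finobt sector counts from the fsmap/awk output above.
sectors="144 160 152 136 152 152 168 152"

total=0; n=0
for s in $sectors; do
	total=$((total + s))
	n=$((n + 1))
done

# Average sectors per AG, then convert to filesystem blocks
# (assumed: 512-byte sectors, 4096-byte blocks, i.e. 8 sectors/block).
avg=$((total / n))
blocks=$((avg * 512 / 4096))
echo "avg sectors/AG: $avg (~$blocks fs blocks/AG)"
```

[This prints an average of 152 sectors, i.e. about 19 blocks per AG, which
matches the rough "~160 sectors (or about 20 blocks)" figure quoted above.]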