On Wed, Sep 07, 2016 at 01:12:12PM -0800, Kent Overstreet wrote:
> So, right now we're checking i_nlinks on every mount - mainly because the
> dirents implementation predates the transactional machinery we have now.
> That's almost definitely what's taking so long, but I'll send you a patch
> to confirm later.

I just pushed a patch that adds printks for the various stages of recovery:
mount with -o verbose_recovery to enable them (example invocation at the end
of this mail).

How many files does this filesystem have? (df -i will tell you.)

As another data point, mounting on my laptop takes half a second - smallish
filesystem though, 47 GB of data and 711k inodes (and it's on an SSD).

My expectation is that mount times with the current code will be good enough
as long as you're using SSDs (or tiering, where tier 0 is SSD) - but I could
use more data points.

Also, increasing the btree node size may help, if you're not already using
max size btree nodes (256k) - see the sketch below. I may re-add prefetching
to metadata scans too; that should help a good bit on rotating disks.

Mounting taking 12 minutes (and the amount of IO you were seeing) implies to
me that metadata isn't being cached as well as it should be, which is odd
considering that outside of journal replay we aren't doing random access -
all the metadata access is in-order scans.

So yeah, definitely want that timing information...
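For reference, here's roughly how I'd collect both numbers. The device and
mount point below are placeholders; the verbose_recovery messages go to the
kernel log, so dmesg is where the stage timings will show up:

  # inode count for the filesystem (the IUsed column)
  df -i /mnt/fs

  # remount with the recovery-stage printks enabled, then pull the
  # timings out of the kernel log
  umount /mnt/fs
  mount -o verbose_recovery /dev/sdX /mnt/fs
  dmesg | tail -n 50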
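And on the btree node size: it's set at format time, so changing it means
recreating the filesystem. I'm not certain of the exact flag name in the
tools build you have (check the format tool's --help), but assuming it
accepts a btree_node_size option, something like:

  # hypothetical invocation - the flag name may differ in your tools build
  bcachefs format --btree_node_size=256k /dev/sdX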