Hi folks,

I've had time to forward port the non-blocking inodegc push changes I had in a different LOD to the current for-next tree. I've run it through the fstests auto group a couple of times and it hasn't caused any space accounting related failures on the machines I've run it on.

The first patch introduces a bound on the maximum work start time for the inodegc queues. It's short - only 10ms (IIRC) - because we don't want to delay inodegc for an arbitrarily long period of time. It means, however, that work always starts quickly, which reduces the need for statfs() to wait for background inodegc to start and complete in order to catch space "freed" by recent unlinks.

The second patch converts statfs() to use a "push" rather than a "flush". The push simply schedules any pending work that hasn't yet timed out to run immediately, then returns. It does not wait for the inodegc work to complete - that's what a flush does, and that's what caused all the problems for statfs(). Hence statfs() is converted to push semantics, thereby removing the blocking behaviour it currently has.

This should eliminate most of the problems Chris has been seeing with lots of processes stuck in statfs() - that will no longer happen. The only time user processes should get stuck now is when the inodegc throttle kicks in (unlinks only at this point) or when we are waiting for a long-running inodegc operation to release a lock it holds. We had those specific problems before background inodegc existed - they manifested as unkillable unlink operations that had everything backed up behind them, rather than background inodegc work having everything backed up behind it. Hence I think these patches largely restore the status quo we had before the background inodegc code was added.

Comments, thoughts and testing appreciated.

Cheers,

Dave.