> I'm not 100% sure it's related (but I'm going to guess it is) but on
> these same boxes, they're not actually able to reboot at the end of a
> graceful shutdown. After yielding that bug and continuing with the
> shutdown process, it gets all the way to exec'ing:
>
> reboot -d -f -i
>
> and then just hangs forever. I'm guessing a thread is hung still
> trying to unmount things. On another box, I triggered that bug with a
> umount of one top-level mount that had subtrees. When I umount'd
> another top-level mount with subtrees on that same box, it's blocked
> and unkillable. That second umount also logged another bug to the
> kernel logs.
>
> In both umounts described above, the entries in /proc/mounts go away
> after the umount.
>
> Jeff, are you at liberty to do a graceful shutdown of the box you saw
> that bug on? If so, does it actually reboot?

A bit more info: on the same boxes, freshly booted but with all the same
mounts (even the subtrees) mounted, I don't get that bug, so it seems to
happen only once there's been significant usage within those mounts.
These are all read-only mounts, if it makes a difference. I was, however,
able to trigger the bug on a box that had been running (web serving) for
about 15 minutes.
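In case it helps confirm the theory: a thread hung in the unmount path should show up as a task in uninterruptible sleep (state D), which is what makes it unkillable. Here's a throwaway sketch that walks /proc directly rather than relying on any particular ps flags (caveat: it mis-parses the rare comm containing spaces):

```shell
#!/bin/sh
# List tasks in uninterruptible sleep (state D) -- the unkillable ones,
# typically blocked inside the kernel, e.g. a umount stuck tearing down
# a mount. /proc/<pid>/stat fields: 1 = pid, 2 = (comm), 3 = state.
for stat in /proc/[0-9]*/stat; do
    read -r pid comm state _ < "$stat" 2>/dev/null || continue
    if [ "$state" = "D" ]; then
        echo "blocked: pid $pid $comm"
    fi
done
```

As root, "echo w > /proc/sysrq-trigger" will additionally dump the kernel stacks of all blocked tasks to the kernel log, which should show exactly where the hung umount thread is sitting.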
Here's a snippet from slabinfo right before umount'ing (let me know if
more of it would help):

# grep nfs /proc/slabinfo
nfsd4_delegations      0      0    360   22    2 : tunables 0 0 0 : slabdata     0     0 0
nfsd4_stateids         0      0    120   34    1 : tunables 0 0 0 : slabdata     0     0 0
nfsd4_files            0      0    136   30    1 : tunables 0 0 0 : slabdata     0     0 0
nfsd4_stateowners      0      0    424   38    4 : tunables 0 0 0 : slabdata     0     0 0
nfs_direct_cache       0      0    136   30    1 : tunables 0 0 0 : slabdata     0     0 0
nfs_write_data        46     46    704   23    4 : tunables 0 0 0 : slabdata     2     2 0
nfs_read_data        207    207    704   23    4 : tunables 0 0 0 : slabdata     9     9 0
nfs_inode_cache    23901  23901   1056   31    8 : tunables 0 0 0 : slabdata   771   771 0
nfs_page             256    256    128   32    1 : tunables 0 0 0 : slabdata     8     8 0
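Incidentally, since column 3 of slabinfo is num_objs and column 4 is objsize in bytes, the nfs_inode_cache line above works out to roughly 24 MiB of NFS inodes still cached at umount time, which fits the "only after significant usage" pattern. A quick awk sketch over a pasted copy of two of those lines (on a live box you'd feed it "grep nfs /proc/slabinfo" instead):

```shell
#!/bin/sh
# Rough per-slab memory footprint: num_objs (col 3) * objsize (col 4).
awk '{ printf "%-20s %10.1f KiB\n", $1, $3 * $4 / 1024 }' <<'EOF'
nfs_inode_cache 23901 23901 1056 31 8 : tunables 0 0 0 : slabdata 771 771 0
nfs_read_data 207 207 704 23 4 : tunables 0 0 0 : slabdata 9 9 0
EOF
```

That prints about 24647.9 KiB for nfs_inode_cache and 142.3 KiB for nfs_read_data.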