On Tue, 6 Sep 2011 at 11:44, Eric Sandeen wrote:

> > remount the filesystem rw and then ro again every day. I guess this equals
> > the scenario of "fs goes down (remount!) while someone is holding open a
> > file"?
>
> well, no - "goes down" means "crashed or lost power"

Hm, the machine and its storage are online all the time, and the messages
occur in between downtimes.

> well, as the commit said, it'd be nice to handle it in remount, yes... :(

If my daily remounts are causing this, that's unfortunate, but it's good to
know now. It would be more worrying if something else were slowly corrupting
the fs.

> well, seems like you need to get to the root cause of the unprocessed
> orphan inodes.
>
> I don't yet have my post-vacation thinking cap back on... does cycling
> rw/ro/rw/ro with open & unlinked files cause an orphan inode situation?

This is almost all I do on this fs. The whole process is:

1) The fs is ro most of the time, while a remote application accesses it
   via a read-only NFS mount.
2) Once a day the fs gets remounted rw (the remote application does not
   know this and keeps accessing the fs via the same ro NFS mount).
3) Backups are pushed to the fs (via rsync, making heavy use of hardlinks).
4) The fs is remounted ro again.
5) At some point the remote application notices that the NFS mount has gone
   stale and has to remount its read-only NFS mount.

Thanks,
Christian.
--
BOFH excuse #93: Feature not yet implemented
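
For illustration, here is a minimal sketch of the daily cycle in steps 1-5
above, written as a small Python wrapper around mount and rsync. The mount
point /srv/backup, the rsync source, and the snapshot directory names are
assumptions for the example only and do not come from the thread:

    #!/usr/bin/env python3
    # Sketch of the daily backup cycle described in the mail above.
    # Paths and hostnames are hypothetical.
    import subprocess

    FS = "/srv/backup"        # ext4 fs exported read-only over NFS (assumed)
    SRC = "remote:/data/"     # rsync source (assumed)

    def remount(mode):
        # e.g. "mount -o remount,rw /srv/backup"
        subprocess.run(["mount", "-o", f"remount,{mode}", FS], check=True)

    def daily_backup():
        remount("rw")                       # step 2: make the fs writable
        try:
            # step 3: push backups; --link-dest hardlinks unchanged files
            # against the previous snapshot
            subprocess.run(
                ["rsync", "-a", "--delete",
                 "--link-dest=" + FS + "/yesterday",
                 SRC, FS + "/today"],
                check=True)
        finally:
            remount("ro")                   # step 4: back to read-only

    if __name__ == "__main__":
        daily_backup()

The only point of the sketch is that the fs is flipped rw for the duration
of the rsync run and back to ro afterwards, which is exactly the rw/ro
cycling with possibly open & unlinked files that Eric asks about.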