On Mon, Jun 23, 2014 at 08:09:37AM +0200, Killian De Volder wrote:
> It's still checking due to the high amount of RAM it's using.
> However, if I start a parallel check with -nf, it finds other errors
> that the one with the high memory usage hasn't found yet?

No, definitely not that!  Running two e2fsck's in parallel will do far
more harm than good.

> Should I start a new one, or is this not advised?
> Sometimes I think it's bad inodes causing artificial memory usage.

What part of the e2fsck run are you in?  If you are in passes 1b/1c/1d,
one of the things you can do is analyze the log output to date and
individually investigate the inodes that were reported as bad using
debugfs.  You could then back up whatever is worth saving out of those
inodes, and then use the debugfs "clri" command to zap the bad inode.

I have done that to reduce the number of bad inodes so that e2fsck
passes 1b, 1c, and 1d run faster.  But I've never done it on a really
huge file system, and it may not be worth the effort.

What I'd probably do instead is edit e2fsck to skip passes 1b, 1c, and
1d, and then hope for the best.  The file system will still be
corrupted, and there is a chance that you will do some damage in the
later passes because you skipped passes 1b/c/d, but if the goal is to
get the file system into a state where you can safely mount it
read-only, that would probably be your best bet.

						- Ted
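
A rough sketch of the debugfs route described above, assuming the
e2fsck log has flagged inode 123456 on /dev/sdX (both are placeholders;
substitute the real device and the inode numbers from the log).
debugfs opens the file system read-only unless -w is given, so the
inspection steps are safe while deciding what to keep:

    # find the path name(s) and metadata for the suspect inode
    debugfs /dev/sdX
    debugfs:  ncheck 123456
    debugfs:  stat <123456>

    # copy anything worth keeping (here, a regular file) out to a
    # different, healthy file system
    debugfs:  dump <123456> /mnt/rescue/saved-file

    # then reopen read-write and zap the inode
    debugfs -w /dev/sdX
    debugfs:  clri <123456>

A subsequent e2fsck run is still needed afterwards to fix up the block
and inode bitmaps and clear any directory entries that still point at
the zapped inode.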
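
Skipping passes 1b/1c/1d is not a supported e2fsck option, so it means
a local hack to the e2fsprogs source.  In the e2fsprogs sources this is
based on, pass 1 appears to hand the shared-block work off to
e2fsck_pass1_dupblocks() in e2fsck/pass1b.c, which in turn drives
passes 1B, 1C and 1D; verify the function name and call site against
the version you actually build.  A rough sketch of such a hack:

    /* e2fsck/pass1b.c -- local hack, not for general use: bail out
     * before any pass 1B/1C/1D work is done.  The duplicate-block
     * corruption is left in place, so only mount the resulting file
     * system read-only afterwards. */
    void e2fsck_pass1_dupblocks(e2fsck_t ctx, char *block_buf)
    {
            (void) ctx;
            (void) block_buf;
            return;         /* skip passes 1B, 1C and 1D entirely */

            /* ... original function body continues unchanged ... */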