I performed some recovery (fsck) tests with a large EXT4 filesystem. The filesystem size was 500GB (3 million files, 5000 directories). I ran a forced check on the clean filesystem and measured the memory usage, which was around 2GB. Then I corrupted the metadata - 10% of the files, 10% of the directories, and some superblock attributes - using debugfs. Running fsck again, I measured a memory usage of around 8GB, a much larger value.

1. Is there a way to reduce the memory usage (apart from the scratch_files option, since it increases the recovery time)?

2. This question is not strictly related to this EXT4 mailing list, but in a real scenario, how is this kind of situation (large memory usage) handled in large-scale filesystem deployments when actual filesystem corruption occurs (perhaps due to a fault in the hardware/controller)?
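
For reference, peak memory can be measured along these lines - a sketch, assuming GNU time is installed; the device path is an example, and -n keeps the check read-only:

    # Report peak RSS of a forced, read-only e2fsck run (device path is an example)
    /usr/bin/time -v e2fsck -f -n /dev/sdb1 2>&1 | grep 'Maximum resident set size'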
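
debugfs can inject this kind of metadata corruption with write-mode requests along these lines (illustrative only - the inode number, fields, and values are made-up examples, not the exact commands I used):

    # Clobber an inode field and a superblock field (all values are examples)
    debugfs -w -R 'sif <12> links_count 100' /dev/sdb1
    debugfs -w -R 'ssv inodes_count 0' /dev/sdb1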
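
Regarding question 1: the scratch_files option mentioned above is configured through /etc/e2fsck.conf. A minimal sketch (the directory path and threshold value are examples) that moves e2fsck's in-memory directory and icount structures into on-disk tdb files, trading memory for recovery time:

    [scratch_files]
        directory = /var/cache/e2fsck
        numdirs_threshold = 100000

With numdirs_threshold set, the scratch files are only used when the filesystem has more than that many directories, so small filesystems keep the faster in-memory path.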