Unfortunately, there have been a huge number of bug fixes for ext4's
online resize since kernel 2.6.32 and e2fsprogs 1.42.11. It's quite
possible that you hit one of them.

> The 51.8% seems very suspicious to me. A few weeks ago, I did an online
> resize2fs, and the original filesystem was about 52% the size of the new
> one (from 2.7TB to 5.3TB). The resize2fs didn't report any errors, and
> I haven't seen any physical errors in the logs, so this is the first
> indication I've had of a problem.

Well, actually it's not quite that simple. There are multiple passes in
e2fsck, and the first pass is estimated to account for 70% of the total
e2fsck run. So a reported progress of 51.8% means e2fsck had gotten
about 74% of the way through pass 1, which in turn means it had gotten
through the inodes located roughly 3.9TB into the file system.

That being said, it's pretty clear that portions of the inode table and
the block group descriptors were badly corrupted. So I suspect there
isn't going to be much that can be done to repair the file system
completely. If there are specific files you need to recover, I'd
suggest trying to recover them first, before doing anything else. The
good news is that around 75% of your files can probably be recovered.

	- Ted
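
P.S. A quick back-of-the-envelope sketch of the progress arithmetic
above, assuming the 70% pass-1 weighting I mentioned and the 5.3TB
size from your quoted message (neither number is read out of e2fsck
itself):

    # Rough sanity check of the numbers in this thread.  The pass-1
    # weighting and the file system size are assumptions taken from
    # the messages above.
    reported_progress = 0.518  # overall progress reported by e2fsck
    pass1_weight = 0.70        # pass 1 estimated at 70% of the total run
    fs_size_tb = 5.3           # size of the resized file system, in TB

    pass1_fraction = reported_progress / pass1_weight
    print(f"about {pass1_fraction:.0%} of the way through pass 1")
    print(f"about {pass1_fraction * fs_size_tb:.1f}TB into the file system")

which prints roughly 74% and 3.9TB, matching the figures above.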