It is possible to have a flex_bg filesystem with block groups whose inode and
block bitmaps are located well past the start of the group. If an offline
shrink puts the new size somewhere between the start of such a block group and
the (old) location of its bitmaps, the bitmaps can be left beyond the end of
the filesystem, i.e. the shrink results in fs corruption.

Check each remaining block group for bitmaps that lie beyond the end of the
new filesystem, and reallocate them to a new location if needed.

Signed-off-by: Eric Sandeen <sandeen@xxxxxxxxxx>
---

I have no idea if this is the right approach or not ;)

diff --git a/resize/resize2fs.c b/resize/resize2fs.c
index aa2364c..263dea1 100644
--- a/resize/resize2fs.c
+++ b/resize/resize2fs.c
@@ -854,6 +854,7 @@ static errcode_t blocks_to_move(ext2_resize_t rfs)
 	dgrp_t		i, max_groups, g;
 	blk64_t		blk, group_blk;
 	blk64_t		old_blocks, new_blocks;
+	blk64_t		new_size;
 	unsigned int	meta_bg, meta_bg_size;
 	errcode_t	retval;
 	ext2_filsys	fs, old_fs;
@@ -882,6 +883,32 @@ static errcode_t blocks_to_move(ext2_resize_t rfs)
 	fs = rfs->new_fs;
 
 	/*
+	 * If we're shrinking the filesystem, we need to move any group's
+	 * bitmaps which are beyond the end of the new filesystem.
+	 */
+	new_size = ext2fs_blocks_count(fs->super);
+	if (new_size < ext2fs_blocks_count(old_fs->super)) {
+		for (g = 0; g < fs->group_desc_count; g++) {
+			/*
+			 * ext2fs_allocate_group_table re-allocates
+			 * bitmaps which are set to block 0.
+			 */
+			if (ext2fs_block_bitmap_loc(fs, g) >= new_size) {
+				ext2fs_block_bitmap_loc_set(fs, g, 0);
+				retval = ext2fs_allocate_group_table(fs, g, 0);
+				if (retval)
+					return retval;
+			}
+			if (ext2fs_inode_bitmap_loc(fs, g) >= new_size) {
+				ext2fs_inode_bitmap_loc_set(fs, g, 0);
+				retval = ext2fs_allocate_group_table(fs, g, 0);
+				if (retval)
+					return retval;
+			}
+		}
+	}
+
+	/*
 	 * If we're shrinking the filesystem, we need to move all of
 	 * the blocks that don't fit any more
 	 */
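
For illustration, the condition the patch tests can also be exercised
read-only from userspace. Below is a minimal, untested sketch (error handling
trimmed) that uses the same public libext2fs calls the patch relies on; the
program name and the argument convention (device path plus a proposed new size
in blocks) are hypothetical, not part of the patch:

	/*
	 * Sketch: walk each block group of an unmounted fs image and
	 * report any block/inode bitmap that would land at or past a
	 * proposed shrink target. Assumes libext2fs and unix_io_manager;
	 * "new_size" is a hypothetical target passed in fs blocks.
	 */
	#include <stdio.h>
	#include <stdlib.h>
	#include <ext2fs/ext2fs.h>

	int main(int argc, char **argv)
	{
		ext2_filsys	fs;
		blk64_t		new_size, loc;
		dgrp_t		g;
		errcode_t	retval;

		if (argc != 3) {
			fprintf(stderr, "usage: %s device new_size_blocks\n",
				argv[0]);
			return 1;
		}
		new_size = strtoull(argv[2], NULL, 10);

		retval = ext2fs_open(argv[1], 0, 0, 0, unix_io_manager, &fs);
		if (retval) {
			fprintf(stderr, "couldn't open %s\n", argv[1]);
			return 1;
		}

		for (g = 0; g < fs->group_desc_count; g++) {
			loc = ext2fs_block_bitmap_loc(fs, g);
			if (loc >= new_size)
				printf("group %u: block bitmap at %llu is "
				       "past new end\n", g,
				       (unsigned long long) loc);
			loc = ext2fs_inode_bitmap_loc(fs, g);
			if (loc >= new_size)
				printf("group %u: inode bitmap at %llu is "
				       "past new end\n", g,
				       (unsigned long long) loc);
		}
		ext2fs_close(fs);
		return 0;
	}

Run against an image before an offline shrink, this would list every group
whose bitmaps the loop in the patch would zero out and let
ext2fs_allocate_group_table() re-place inside the new boundary.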