I am a bit confused about how to fix online resize on filesystems using flex_bg.

First, this is roughly how resize currently works (correct me if I am wrong):

Without flex_bg: resize2fs extends the last group to its full size with the GROUP_EXTEND ioctl. Then it "prepares" a new group; that is to say, it computes which blocks will contain the metadata for the new group, then issues a GROUP_ADD ioctl with those block numbers. This works for both online and offline resize because the new group's metadata is created outside the working filesystem.

With flex_bg: it works the same way, but this time the metadata blocks for new groups are created inside the working filesystem (in the group containing the metadata for the whole flex group). resize2fs scans from the end of the last flex group's metadata until it finds enough space to put the new metadata. This is not a problem when resizing offline, but when online, the blocks found for the metadata may be allocated by someone else before the GROUP_ADD ioctl occurs.

I am not sure how to handle this. I guess resize2fs should be able to find and allocate the metadata blocks without being disturbed by other processes, but that could mean blocking all processes accessing the filesystem for a long time while it searches for free blocks. That said, resizing is not done very often, so it could be acceptable. Moreover, I guess doing it this way means letting the kernel side compute the metadata blocks instead of letting the userland resize2fs manage it.

Another approach I thought of would be to deliberately write the new groups' metadata outside the working filesystem (just like non-flex_bg groups), but this would break the "grouped metadata" logic of flex_bg. We could limit the breakage to the last flex group of the resized filesystem if we added some sort of FLEXGROUP_ADD ioctl that allows adding whole, clean flex groups to the filesystem.

Any comments/suggestions are welcome.
Fred