On Mon, Jul 30, 2018 at 10:56:01AM +0200, Jaco Kroon wrote:
> 
> Is there any way to mark those blocks that's being freed to not be
> re-used?  I was contemplating setting them as badblocks using fsck so
> that I can online the filesystem in cycles so that I can get backups to
> function overnight, when they are done in the morning, offline and
> perform the next cycle?

So you can use debugfs's setb, but then you can't use the allocation
bitmap to check whether you have accounted for all of the groups.

If you are willing to modify and recompile the kernel, you could just
make a simple hack to ext4_mb_good_group() in fs/ext4/mballoc.c, and
add something like this at the very beginning of the function:

	/* replace XXXX with the block group you are trying to evacuate */
	if (group == XXXX)
		return 0;

This will cause ext4 to not allocate blocks in that block group.

Similarly, instead of just specifying all of the blocks to the icheck
command, you could modify and recompile debugfs, and do something
like this at the beginning of icheck_proc():

	/* replace YYYY with the first block in the block group you
	   are trying to evacuate */
	if (*block_nr >= YYYY) {
		printf("I: %lu\n", bw->inode);
		return 0;
	}

This is super hacky since it won't dedup the list of inodes, but you
can just save the output to a file, and then do something like this:

grep "^I: " < debugfs.out | sed -e 's/I: //' | sort -u > /tmp/list-of-inos

Finally, a much simpler thing to do, instead of copying the files to
the file system you are trying to work on, is to simply copy them
somewhere *else*.  You only need to copy the files that have blocks in
the last block group, and that's very likely less than a gig or two,
so you can probably find enough swing space on another scratch disk
(even if you have to use a USB-attached HDD) as the destination.  Then
you don't need to do the hack described above to prevent allocations
to that last block group.

Good luck,

					- Ted
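
P.S.  In case it's useful, here is a rough, untested sketch of what the
setb approach would look like from the shell.  The block range and the
device name below are placeholders you'd have to fill in for your file
system, and it has the drawback mentioned above that the on-disk bitmap
will no longer match what's actually in use:

	# Mark N blocks starting at block BBBB as in use in the block
	# allocation bitmap, so the allocator won't hand them out.
	# Run this with the file system unmounted.
	debugfs -w -R "setb BBBB N" /dev/sdXN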
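
And if you go the copy-somewhere-else route, once you have the inode
list from icheck, something along these lines would get the files onto
a scratch disk.  Again, just a sketch: the mount points, the device
name, and the assumption that ncheck prints "inode<TAB>pathname" lines
after a header line are mine, so double-check before running it:

	# Map the inode numbers back to pathnames (relative to the
	# file system root).
	debugfs -R "ncheck $(tr '\n' ' ' < /tmp/list-of-inos)" /dev/sdXN |
		tail -n +2 | cut -f2 > /tmp/list-of-paths

	# Copy each file to scratch space, preserving the directory
	# structure.  /mnt/big and /mnt/scratch are example mount points.
	while read -r p; do
		cp -a --parents "/mnt/big$p" /mnt/scratch/
	done < /tmp/list-of-paths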