On Mon, Sep 26, 2011 at 11:24:31AM -0500, Eric Sandeen wrote:
> > > > bunzip2 < hda1.e2i.bz2 | make-sparse hda1.e2i
> > > > ... and this creates a sparse file in hda1.e2i.
> > or | cp --sparse=always /dev/stdin sparse.img works too.
> > But have you ever tried this with a multi-terabyte image?
> > It takes -forever- to process all those 0s, with cpus pegged.

Yeah, I didn't realize until I read another message on this thread
that it was bzip2's CPU overhead that was causing problems.  Is gzip
sufficiently better, I wonder, or is it still problematic?

> Ted, your concern about space - it doesn't take the full fs size worth
> of space, right, just the metadata space?  So in general it should not
> be THAT much ...

Yes, it's just the metadata space that I was worried about.  So it's
not *that* much, but it still adds up on large systems.  But then
again, large systems are precisely where we have the problem of bzip2
taking forever.

If we decide that we're OK with not compressing, we could use qcow2.
But note that the qcow2 format is still very compressible --- it looks
like qcow2 could do a better job of removing zero blocks.  (I had a
256 meg qcow2 e2image file compress down to 9 megs.)  Unfortunately we
can't do stream compression with qcow2.

In the long run I think we should make the qcow2 support better (by
dropping all-zero blocks, adding support for qcow2 to
debugfs/dumpe2fs/e2fsck, and perhaps adding native compression).
Anyone looking for a project?  :-)

					- Ted
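
As a rough illustration of the zero-skipping step in the pipeline quoted
above, here is a minimal sketch of a make-sparse style filter.  This is
not the actual make-sparse tool or anything from e2fsprogs; the chunk
size, option handling, and program structure are all assumptions.  It
reads a raw image from stdin and writes it to the named output file,
seeking over all-zero chunks so they become holes in a sparse file; the
same memcmp-against-zeroes test is essentially what dropping all-zero
blocks on the qcow2 side would involve.

/*
 * Minimal sketch of a "make-sparse" style filter (illustration only,
 * not the tool referenced above).  Reads a raw image from stdin and
 * writes it to the named output file, seeking over all-zero chunks so
 * they end up as holes in a sparse file.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK 65536			/* arbitrary chunk size */

int main(int argc, char **argv)
{
	static char buf[CHUNK], zeroes[CHUNK];
	ssize_t n;
	off_t pos = 0;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "Usage: %s output-file\n", argv[0]);
		exit(1);
	}
	fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror(argv[1]);
		exit(1);
	}
	while ((n = read(0, buf, CHUNK)) > 0) {
		if (n == CHUNK && memcmp(buf, zeroes, CHUNK) == 0) {
			/* All zero: seek instead of writing, leaving a hole */
			if (lseek(fd, n, SEEK_CUR) < 0) {
				perror("lseek");
				exit(1);
			}
		} else if (write(fd, buf, n) != n) {
			/* Short reads from the pipe are written as-is; that
			 * only makes the result slightly less sparse. */
			perror("write");
			exit(1);
		}
		pos += n;
	}
	if (n < 0) {
		perror("read");
		exit(1);
	}
	/* Extend the file so trailing holes are included in its size */
	if (ftruncate(fd, pos) < 0) {
		perror("ftruncate");
		exit(1);
	}
	close(fd);
	return 0;
}

It would slot into the quoted pipeline roughly as
bunzip2 < hda1.e2i.bz2 | ./make-sparse hda1.e2i, with the file names
taken from the example at the top of this message.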