> On 06/03/2010 10:23 AM, Andreas Dilger wrote:
> > On 2010-06-02, at 20:46, Nitin Gupta wrote:
> > > I was thinking it would be quite clever to do compression in, say,
> > > 64kB or 128kB chunks in a mapping (to get decent compression) and
> > > then write these compressed chunks directly from the page cache
> > > to disk in btrfs and/or a revived compressed ext4.
>
> Batching of pages to get good compression ratio seems doable.

Is there evidence that batching a set of random individual 4K pages
will have a significantly better compression ratio than compressing
the pages separately?  I certainly understand that if the pages are
from the same file, compression is likely to be better, but pages
evicted from the page cache (which is the source for all
cleancache_puts) are likely to be quite a bit more random than that,
aren't they?
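For what it's worth, one way to check this empirically from userspace
might be a sketch along the lines below: it compresses 32 pages taken
from a file one 4 KiB page at a time, then compresses the same 128 KiB
as a single buffer, and prints the two totals.  zlib's compress() at
the default level is only a stand-in for whatever compressor the
kernel side would actually use, and the file name and 128 KiB batch
size are arbitrary choices.  Note that reading consecutive pages from
one file only exercises the same-file case; to mimic a stream of
random cleancache_puts, each 4 KiB slot would have to be filled from a
different file or offset instead.

/*
 * Userspace sketch (not kernel code): read 32 pages (128 KiB) from a
 * file, compress each 4 KiB page on its own, then compress the same
 * data as one contiguous buffer, and print both totals.
 * Build with:  cc -O2 batchtest.c -lz
 */
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

#define PAGE_SZ 4096
#define NPAGES  32              /* 128 KiB batch, as discussed above */

int main(int argc, char **argv)
{
	static unsigned char in[NPAGES * PAGE_SZ];
	static unsigned char out[2 * NPAGES * PAGE_SZ];  /* ample room */
	uLong per_page_total = 0;
	uLongf dlen;
	FILE *f;
	int i;

	if (argc != 2 || !(f = fopen(argv[1], "rb"))) {
		fprintf(stderr, "usage: %s <file of at least 128 KiB>\n",
			argv[0]);
		return 1;
	}
	if (fread(in, 1, sizeof(in), f) != sizeof(in)) {
		fprintf(stderr, "%s: short read\n", argv[1]);
		return 1;
	}
	fclose(f);

	/* Each 4 KiB page compressed independently. */
	for (i = 0; i < NPAGES; i++) {
		dlen = sizeof(out);
		if (compress(out, &dlen, in + i * PAGE_SZ, PAGE_SZ) != Z_OK)
			return 1;
		per_page_total += dlen;
	}

	/* The whole 128 KiB batch compressed in one go. */
	dlen = sizeof(out);
	if (compress(out, &dlen, in, sizeof(in)) != Z_OK)
		return 1;

	printf("per-page total: %lu bytes, batched: %lu bytes\n",
	       (unsigned long)per_page_total, (unsigned long)dlen);
	return 0;
}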