Re: [PATCH V2 0/7] Cleancache (was Transcendent Memory): overview

On 06/03/2010 09:13 PM, Dan Magenheimer wrote:
>> On 06/03/2010 10:23 AM, Andreas Dilger wrote:
>>> On 2010-06-02, at 20:46, Nitin Gupta wrote:
>>
>>> I was thinking it would be quite clever to do compression in, say,
>>> 64kB or 128kB chunks in a mapping (to get decent compression) and
>>> then write these compressed chunks directly from the page cache
>>> to disk in btrfs and/or a revived compressed ext4.
>>
>> Batching of pages to get good compression ratio seems doable.
> 
> Is there evidence that batching a set of random individual 4K
> pages will have a significantly better compression ratio than
> compressing the pages separately?  I certainly understand that
> if the pages are from the same file, compression is likely to
> be better, but pages evicted from the page cache (which is
> the source for all cleancache_puts) are likely to be quite a
> bit more random than that, aren't they?
> 


Batching of pages from random files may not be as effective, but it
would be interesting to collect some data on this. Still, per-inode
batching of pages seems doable, and that should help us get around
this problem.
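
For a first estimate, something along the lines of the following
userspace sketch could be used (plain zlib, nothing kernel-specific;
the page count and page contents are made up, so real page-cache data
would be needed for meaningful numbers). It compares the total size of
sixteen 4K pages compressed one at a time against the same pages
compressed as a single 64K batch:

#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define PAGE_SZ   4096
#define NR_PAGES  16

int main(void)
{
	static unsigned char pages[NR_PAGES][PAGE_SZ];
	static unsigned char out[NR_PAGES * PAGE_SZ + 1024];
	unsigned long separate = 0, batched;
	uLongf dlen;
	int i;

	/* Synthetic, file-like content; stands in for evicted
	 * page-cache pages. */
	for (i = 0; i < NR_PAGES; i++)
		memset(pages[i], 'a' + (i % 4), PAGE_SZ);

	/* Each 4K page compressed on its own. */
	for (i = 0; i < NR_PAGES; i++) {
		dlen = sizeof(out);
		compress(out, &dlen, pages[i], PAGE_SZ);
		separate += dlen;
	}

	/* The same pages compressed as one 64K batch. */
	dlen = sizeof(out);
	compress(out, &dlen, pages[0], NR_PAGES * PAGE_SZ);
	batched = dlen;

	printf("separate: %lu bytes, batched: %lu bytes\n",
	       separate, batched);
	return 0;
}

(Build with gcc -lz; the return values of compress() are ignored here
for brevity.)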

Thanks,
Nitin