Re: Q. cache in squashfs?

Phillip Lougher:
> What I think you're seeing here is the negative effect of fragment
> blocks (tail-end packing) in the native squashfs example and the
> positive effect of vfs/loop block caching in the ext3 on squashfs example.

Thank you very much for your explanation.
I think the number of cached decompressed fragment blocks is related
too. I thought it was much larger, but I found it is only 3 by default.
I will try a larger value, with and without the -no-fragments option
you pointed out.
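Something like this is what I plan to try (just a sketch; the paths
and the larger cache size value here are arbitrary):

  # rebuild the image without tail-end packing (fragment blocks)
  mksquashfs /path/to/src img-nofrag.sqsh -no-fragments

  # enlarge the decompressed fragment cache (default is 3) in the
  # kernel config and rebuild squashfs
  CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=16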

Also I am afraid the nested loopback mount will cause double (or more)
caching, once by the ext3 loopback and once by the native squashfs
loopback, and some people will not want this.
But if the user has plenty of memory and does not care about the
nested caching (because it will be reclaimed when necessary), then I
expect the nested loopback mount will be a good option.
For instance (roughly the setup sketched below this list):
- CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE = 1
- a single inner ext2 image
- mksquashfs without -no-fragments
- 1GB of RAM
- squashfs image size of 250MB
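That is, roughly this kind of nested mount (the paths are made up):

  # outer mount: the squashfs image via loopback
  mount -o loop -t squashfs /path/to/image.sqsh /mnt/sq
  # inner mount: the ext2 image stored inside the squashfs,
  # via a second loopback
  mount -o loop -t ext2 /mnt/sq/fs.ext2 /mnt/ext2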

Do you think it will be better for a very random access pattern?


J. R. Okajima

