Re: Q. cache in squashfs?

Phillip Lougher wrote:


That was discussed on this list back in 2008, and there are pros and cons
to doing this.  You can look at the list archives for the discussion, so
I won't repeat it here.  At the moment I see this as a red herring,
because your results suggest something more fundamental is wrong.  Doing
what you did above with the size of the read_page cache should not have
made any difference, and if it did, it suggests pages which *should* be
in the page cache (explicitly pushed there by the read_page() routine) are
not there.  In short, it's not a question of whether Squashfs should be
using the page cache; for the pages in question it already is.

I'll try to reproduce your results, as they are, to be frank,
significantly at variance with my previous experience.  Maybe there's a
bug, or VFS changes mean the pushing of pages into the page cache isn't
working, but I cannot see where your repeated block reading/decompression
results are coming from.


You can determine which blocks are being repeatedly decompressed by
printing out the value of cache->name in squashfs_cache_get().

You should get one of "data", "fragment", or "metadata" for data
blocks, fragment blocks, and metadata respectively.

This information will go a long way in showing where the problem lies.

Phillip

--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
