> Every stackable file system caches the data at its own level and
> copies it from/to the lower file system's cached pages when
> necessary. ... this effectively reduces the system's cache memory
> size by two or more times.

It should not be that bad with a decent cache replacement policy; I wonder whether, in observing the problem (which you corrected in the various ways you've described), you got any insight into what exactly was happening.

In the classic case of multiple caches, where each cache has a fixed size (example: a cache in the disk drive plus a cache in the operating system), the caches tend to contain different data. The most frequently accessed data sits in the near cache and the less frequently accessed data in the far cache. That's because frequent accesses to a piece of data are always near-cache hits, so the far cache never sees them and considers that data once-only.

In the stacked filesystem case, it should be even better, because it's all one pool of memory. The far cache should shrink down to nothing, since anything that might have been a hit in that cache is a hit in the near cache first.

There are certainly simplistic cache replacement algorithms, and specific workloads, that defeat that. Straight LRU with lots of once-only accesses would tend to generate twice as much cache waste. But the reduction in useful cache space would be less than half, because at least some of the pages are frequently accessed, and those are stored only once.

I lost track of the Linux cache replacement policy years ago, but it used to have a second-chance element that should measure frequency well enough to stop this cache duplication: a page read from a file stayed on the inactive list until it was referenced again, so it could not remain in memory long when there was contention for memory. I believe this would keep the far cache's pages permanently inactive, so they would consume essentially no resource.
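The second-chance argument can be sketched with a toy simulation. This is a deliberately simplified model, not the actual Linux reclaim code: `TwoListCache`, the capacity, and the workload are all invented for illustration. Lower-level copies are touched only on upper-level misses, so they never earn promotion to the active list, and memory pressure flushes them out while the hot upper-level copies stay resident:

```python
from collections import OrderedDict

class TwoListCache:
    """Toy second-chance page cache (in the spirit of the Linux
    active/inactive lists, heavily simplified): a page enters the
    inactive list on first touch and is promoted to the active list
    only if referenced again while resident; eviction always takes
    the oldest inactive page first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.active = OrderedDict()
        self.inactive = OrderedDict()

    def __contains__(self, key):
        return key in self.active or key in self.inactive

    def touch(self, key):
        if key in self.active:
            self.active.move_to_end(key)      # refresh recency
        elif key in self.inactive:
            del self.inactive[key]            # second reference:
            self.active[key] = True           # promote to active
        else:
            self.inactive[key] = True         # first reference
            while len(self.active) + len(self.inactive) > self.capacity:
                victims = self.inactive or self.active
                victims.popitem(last=False)   # oldest inactive first

def access(cache, page):
    """One read through a stacked filesystem: the upper (near) copy
    is touched on every access; the lower (far) copy is touched only
    when the upper copy misses, i.e. on read-through."""
    upper, lower = ("upper", page), ("lower", page)
    if upper not in cache:
        cache.touch(lower)
    cache.touch(upper)

cache = TwoListCache(capacity=60)
hot = range(50)                 # working set, re-referenced often
for _ in range(3):              # warm up: each hot page touched twice
    for p in hot:
        access(cache, p)
        access(cache, p)
stream = iter(range(1000, 1150))
for _ in range(3):              # hot hits mixed with once-only reads
    for p in hot:
        access(cache, p)
        access(cache, next(stream))

resident = list(cache.active) + list(cache.inactive)
hot_lowers = sum(1 for lvl, p in resident if lvl == "lower" and p < 1000)
print("duplicate lower copies of hot pages still resident:", hot_lowers)
# -> duplicate lower copies of hot pages still resident: 0
```

In this run the active list ends up holding only the 50 hot upper-level pages; the once-only stream and the duplicated lower-level copies just churn through the inactive list. Whether the real reclaim code still behaves this way is exactly the question.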
So I'd be interested to know by what mechanism stacked filesystems drastically reduced cache efficiency in your experiments, and whether a simple policy change might solve the problem as well as the more complex approach of getting an individual filesystem driver more involved in memory management.

-- 
Bryan Henderson                          IBM Almaden Research Center
San Jose CA                              Filesystems

-
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html