David,

The information I can provide at the moment is incomplete, but I'm hoping it might shake loose something useful anyway.

I'm seeing what look like stability problems while testing the performance impact of using fscache and cachefiles with the Lustre client filesystem. I'm trying to demonstrate via iozone that fscache both works and has performance benefits in certain circumstances. The main constraint is that I'm stuck, for now, with EL5 kernels due to copious dependencies in the Lustre 1.6.* source base. That's a big issue, but it shouldn't be a deal breaker. I'm currently running kernel 2.6.18-53.1.14.

fscache/cachefiles works when /var/fscache is just a directory on my boot drive (ext3). That's nice, but my boot drive is not faster than my network: I can stream about 700 MB/s over InfiniBand, and my Lustre object servers can keep up with that, while the boot drive is good for about 60 MB/s. I have a large ramdisk appliance (Violin), as well as other fast disk options, but cachefilesd always dies when /var/fscache is a mount point rather than a subdirectory.

Here are some details (an incomplete matrix of the possibilities, but hopefully useful):

- With ext3 on the ramdisk, cachefilesd dies on startup.
- With xfs on the ramdisk, cachefilesd dies when I start doing I/O that includes writing to fscache.
- With xfs on the ramdisk, mounted with the "nobarrier" option, cachefilesd does not die, but no data is written to the ramdisk.
- With xfs on a regular disk (also with nobarrier), cachefilesd dies when I write to the fscache.

The only consistent pattern is that no scenario in which /var/fscache is a mount point works. I would appreciate any ideas, and let me know if more info is needed; I've put a rough sketch of the setup in a P.S. below.

I have not looked under the hood of cachefiles or cachefilesd, and I was hoping not to have to: cachefiles came with the kernel, and I installed cachefilesd with yum.

Thanks for any ideas,
John Groves
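
P.S. In case it helps, here is roughly how the cache is being set up in each case. This is a sketch rather than a verbatim transcript: the cachefilesd.conf values are the stock defaults, the ramdisk device name below is just a placeholder, and the exact mkfs/mount invocations are reconstructed from memory, so they may not match every run exactly.

    # /etc/cachefilesd.conf (stock defaults; only "dir" matters for the
    # mount-point-vs-subdirectory question)
    dir /var/fscache
    tag mycache
    brun 10%
    bcull 7%
    bstop 3%
    frun 10%
    fcull 7%
    fstop 3%

    # Working case: /var/fscache is an ordinary directory on the ext3 boot drive
    service cachefilesd start

    # Failing cases: /var/fscache is a mount point on the ramdisk (or fast disk)
    RAMDEV=/dev/XXX                            # placeholder for the Violin device

    mkfs.ext3 $RAMDEV
    mount -o user_xattr $RAMDEV /var/fscache   # ext3 needs user_xattr for cachefiles' xattrs
    service cachefilesd start                  # dies on startup

    mkfs.xfs -f $RAMDEV
    mount $RAMDEV /var/fscache                 # xfs supports user xattrs by default
    service cachefilesd start                  # dies once writes hit the cache

    mount -o nobarrier $RAMDEV /var/fscache    # cachefilesd survives, but nothing lands on the ramdisk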