On Tue, May 16, 2023 at 11:42 AM Chris Chilvers <chilversc@xxxxxxxxx> wrote:
>
> While testing the fscache performance fixes [1] that were merged into 6.4-rc1
> it appears that the caching no longer works. The client will write to the cache
> but never reads.
>

Thanks for the report.  If you reboot, do you see reads from the cache?

You can check whether the cache is being read from by looking at the "IO"
line in /proc/fs/fscache/stats:

# grep IO /proc/fs/fscache/stats
IO     : rd=80030 wr=0

You might consider saving that file before your test, then another copy
after, and doing a diff:

# diff -u /tmp/fscachestats.1 /tmp/fscachestats.2
--- /tmp/fscachestats.1 2023-05-16 14:48:43.126158403 -0400
+++ /tmp/fscachestats.2 2023-05-16 14:54:05.421402702 -0400
@@ -1,14 +1,14 @@
 FS-Cache statistics
-Cookies: n=0 v=1 vcol=0 voom=0
-Acquire: n=0 ok=0 oom=0
-LRU    : n=0 exp=0 rmv=0 drp=0 at=0
+Cookies: n=5 v=1 vcol=0 voom=0
+Acquire: n=5 ok=5 oom=0
+LRU    : n=0 exp=5 rmv=0 drp=0 at=0
 Invals : n=0
-Updates: n=0 rsz=0 rsn=0
+Updates: n=5 rsz=0 rsn=0
 Relinqs: n=0 rtr=0 drop=0
 NoSpace: nwr=0 ncr=0 cull=6
-IO     : rd=0 wr=0
-RdHelp : RA=0 RP=0 WB=0 WBZ=0 rr=0 sr=0
+IO     : rd=40015 wr=0
+RdHelp : RA=40015 RP=0 WB=0 WBZ=0 rr=0 sr=0
 RdHelp : ZR=0 sh=0 sk=0
 RdHelp : DL=0 ds=0 df=0 di=0
-RdHelp : RD=0 rs=0 rf=0
+RdHelp : RD=40015 rs=40015 rf=0
 RdHelp : WR=0 ws=0 wf=0

> I suspect this is related to known issue #1. However, I tested the client
> with rsize less than, equal to, and greater than readahead, and in all cases
> I see the issue.
>
> If I apply both the patches [2], [3] from the known issues to 6.4-rc1 then the
> cache works as expected. I suspect only patch [2] is required but have not
> tested patch [2] without [3].
>

Agreed, it's likely only the patch from issue #1 is needed.  Let me ping
dhowells and willy on that thread for issue #1, as it looks stalled.

> Testing
> =======
> For the test I was just using dd to read 300 x 1gb files from an NFS
> share to fill the cache, then repeating the read.
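The snapshot-and-diff technique above can be wrapped in a couple of small
shell helpers; this is only a sketch (the function names and /tmp paths are
my own, not anything from the thread):

```shell
#!/bin/sh
# Run a command with before/after snapshots of a /proc stats file.
# Works for /proc/fs/fscache/stats and /proc/self/mountstats alike.
snap_around() {
    statsfile=$1; shift
    cat "$statsfile" > /tmp/stats.before
    "$@"
    cat "$statsfile" > /tmp/stats.after
}

# Given two snapshots of /proc/fs/fscache/stats, print how far the
# cache-read counter ("IO : rd=") advanced between them.  A delta of 0
# across a re-read pass means the cache was never read from.
fscache_rd_delta() {
    before=$(sed -n 's/^IO *: rd=\([0-9]*\).*/\1/p' "$1")
    after=$(sed -n 's/^IO *: rd=\([0-9]*\).*/\1/p' "$2")
    echo $((after - before))
}
```

For example, `snap_around /proc/fs/fscache/stats dd if=/mnt/nfs/file1
of=/dev/null bs=1M` (mount point assumed) followed by
`fscache_rd_delta /tmp/stats.before /tmp/stats.after`; the same pair of
snapshots taken from /proc/self/mountstats can instead be fed to
`mountstats -S /tmp/stats.before -f /tmp/stats.after`.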
>

Can you share:
1. Which NFS server you're using (is it localhost or something else)?
2. Which NFS version?

> In the first test run, /var/cache/fscache steadily filled until reaching
> 300 GB. The read I/O was less than 1 MB/s, and the write speed was fairly
> constant 270 MB/s.
>
> In the second run, /var/cache/fscache remained at 300 GB, so no new data was
> being written. However, the read I/O remained at less than 1 MB/s and the
> write rate at 270 MB/s.
>
>                   /var/cache/fscache
>             | 1st run     | 2nd run
> disk usage  | 0 -> 300 GB | 300 GB
> read speed  | < 1 MB/s    | < 1 MB/s
> write speed | 270 MB/s    | 270 MB/s
>
> This seems to imply that the already cached data was being read from the
> source server and re-written to the cache.
>

In addition to checking the above for reads from the cache, you can also see
whether NFS reads are going over the wire pretty easily with a similar
technique.  Copy /proc/self/mountstats to a file before your test, make a
second copy after the test, then run mountstats as follows:

mountstats -S /tmp/mountstats.1 -f /tmp/mountstats.2

> Known Issues
> ============
> 1. Unit test setting rsize < readahead does not properly read from
>    fscache but re-reads data from the NFS server
>    * This will be fixed with another dhowells patch [2]:
>      "[PATCH v6 2/2] mm, netfs, fscache: Stop read optimisation when
>      folio removed from pagecache"
>
> 2.
> "Cache volume key already in use" after xfstest runs involving
>    multiple mounts
>    * Simple reproducer requires just two mounts as follows:
>      mount -overs=4.1,fsc,nosharecache -o context=system_u:object_r:root_t:s0 nfs-server:/exp1 /mnt1
>      mount -overs=4.1,fsc,nosharecache -o context=system_u:object_r:root_t:s0 nfs-server:/exp2 /mnt2
>    * This should be fixed with dhowells patch [3]:
>      "[PATCH v5] vfs, security: Fix automount superblock LSM init
>      problem, preventing NFS sb sharing"
>
> References
> ==========
>
> [1] https://lore.kernel.org/linux-nfs/20230220134308.1193219-1-dwysocha@xxxxxxxxxx/
> [2] https://lore.kernel.org/linux-nfs/20230216150701.3654894-1-dhowells@xxxxxxxxxx/T/#mf3807fa68fb6d495b87dde0d76b5237833a0cc81
> [3] https://lore.kernel.org/linux-kernel/217595.1662033775@xxxxxxxxxxxxxxxxxxxxxx/
>
> --
> Linux-cachefs mailing list
> Linux-cachefs@xxxxxxxxxx
> https://listman.redhat.com/mailman/listinfo/linux-cachefs