I'm experimenting with dm-writecache, with one of the little 16G optane SSDs as the cache device in front of a couple of hard drives, and it's not helping; I'm curious why not.

My test is just to NFS-export the hard drive and then run time tar -xzvf ~/linux-5.9.tar.gz from an NFS client. (So it's reading from local disk and writing to NFS.) NFS servers aren't allowed to reply to clients until operations reach stable storage, including metadata operations like create, unlink, rename, or setattr, so this is something of a worst case: untarring a kernel tree to a local filesystem takes me 12 seconds, but nearly 2 hours to the exported hard drives, since the untar is a single thread that ends up waiting on hundreds of thousands of seeks.

If I just export the optane, total time is about 4 minutes. If I export a dm-writecache device using the optane, it's back to 2 hours.

For now I'm using an xfs filesystem with an external journal on the optane, which is sort of OK (about 15 minutes on this test), but I'm curious why dm-writecache is acting like this. Is this expected? Are there any statistics I should be watching to understand what's going on?

I'm pretty ignorant here, so it's also possible I just misconfigured something somehow. I set it up with just "lvconvert --type writecache --cachevol optane export", and haven't tried tweaking any options. I'm on recent Fedora (with kernel-5.9.14-200.fc33.x86_64).

--b.

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
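For anyone reproducing the timing test, it looks roughly like this. The export path and hostname are assumptions; note that sync is the NFS export default, and it is what forces the server to wait for stable storage before replying:

```shell
# On the server (sync is the default export behavior and is what
# makes every metadata operation wait for stable storage):
echo '/export  *(rw,sync)' >> /etc/exports
exportfs -ra

# On the client:
mount -t nfs server:/export /mnt
cd /mnt && time tar -xzvf ~/linux-5.9.tar.gz
```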
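For reference, the setup step can be sketched as below. All device, VG, and LV names are assumptions; the post only shows the final lvconvert command. lvmcache(7) also documents writecache tunables (e.g. high_watermark, writeback_jobs) that can be passed with --cachesettings, none of which were tried here:

```shell
# Hypothetical devices: /dev/sda = hard drive, /dev/nvme0n1 = 16G optane.
pvcreate /dev/sda /dev/nvme0n1
vgcreate vg0 /dev/sda /dev/nvme0n1

# Main LV on the hard drive, cache LV on the optane:
lvcreate -n export -l 100%PVS vg0 /dev/sda
lvcreate -n optane -l 100%PVS vg0 /dev/nvme0n1

# The step from the post: attach the optane LV as a writecache cachevol.
lvconvert --type writecache --cachevol optane vg0/export
```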
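On the statistics question: one place to look is dmsetup status on the writecache device. This is a sketch assuming the status format documented for kernels around 5.9 (Documentation/admin-guide/device-mapper/writecache.rst), where the fields after the target type are the error indicator, total blocks, free blocks, and blocks under writeback; the device name is hypothetical:

```shell
# Summarize one dmsetup-status line for a writecache target.
writecache_stats() {
    awk '$3 == "writecache" { printf "error=%s total=%s free=%s writeback=%s used=%d\n", $4, $5, $6, $7, $5 - $6 }'
}

# Typical use (device name hypothetical):
#   dmsetup status vg0-export | writecache_stats
# Sample line, just to show the output shape:
echo "0 5860533168 writecache 0 3932160 3930000 1024" | writecache_stats
```

If "used" stays near zero while the tar runs, writes aren't landing in the cache at all; if the cache fills but writeback drains slowly, the bottleneck has only moved.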
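The external-journal workaround mentioned above looks roughly like this; device paths and the log size are assumptions. The external log absorbs the synchronous journal writes that the NFS metadata operations force out:

```shell
# Hypothetical devices: /dev/vg0/export = hard-drive LV,
# /dev/nvme0n1p1 = partition on the 16G optane for the log.
mkfs.xfs -f -l logdev=/dev/nvme0n1p1,size=512m /dev/vg0/export

# The log device must also be named at mount time:
mount -o logdev=/dev/nvme0n1p1 /dev/vg0/export /export
```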