Hi,

I've been experimenting a bit more with high-latency NFSv4.2 (200ms). I've noticed a difference in file creation rates when parallel processes running against a single client mount create files in multiple directories compared to one shared directory.

If I start 100 processes on the same client creating unique files in a single shared directory (with 200ms latency), the rate of new file creates is limited to around 3 files per second. Something like this:

# add latency to the client
sudo tc qdisc replace dev eth0 root netem delay 200ms

sudo mount -o vers=4.2,nocto,actimeo=3600 server:/data /tmp/data

for x in {1..10000}; do
    echo /tmp/data/dir1/touch.$x
done | xargs -n1 -P 100 -iX -t touch X 2>&1 | pv -l -a > /dev/null

It's a similar (slow) result for NFSv3. If we run it again just to update the existing files, it's a lot faster (32 files/s) because of nocto,actimeo and open file caching.

Then if I switch it so that each process on the client creates hundreds of files in a unique directory per process, the aggregate file create rate increases to 32 per second. For NFSv3 it's 162 aggregate new files per second. So much better parallelism is possible when the creates are spread across multiple remote directories from the same client.

If I then take the slow 3-creates-per-second example again but instead use 10 client hosts (all with 200ms latency) and set them all creating in the same remote server directory, then we get 3 x 10 = 30 creates per second. So we can achieve some parallel file create performance in the same remote directory, just not from a single client running multiple processes. Which makes me think it's more of a client limitation than a server locking issue?

My interest in this (as always) is because while having hundreds of processes creating files in the same directory might not be a common workload, it is if you are re-exporting a filesystem and multiple clients are creating new files for writing.
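For reference, the unique-directory-per-process variant looks roughly like this (a sketch only; the dir.N layout and the process/file counts here are illustrative and scaled down from the 100-process run above, and ROOT stands in for the NFS mount):

```shell
# Sketch of the "unique directory per process" variant.
# Counts are scaled down for illustration; the real run used
# 100 processes creating hundreds of files each.
ROOT=${ROOT:-/tmp/data}   # the NFS client mount from the example above
NPROC=10
NFILES=20

# give every worker its own directory so no two creates share a parent
for p in $(seq 1 "$NPROC"); do
    mkdir -p "$ROOT/dir.$p"
done

# each worker only ever creates files in its own directory
seq 1 "$NPROC" | xargs -P "$NPROC" -I{} sh -c \
    "for x in \$(seq 1 $NFILES); do touch $ROOT/dir.{}/touch.\$x; done"
```

With -t on xargs and pv -l -a bolted onto the end, as in the shared-directory example, you can watch the aggregate create rate in the same way.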
For example, a batch job creating files in a common output directory. Re-exporting is a useful way of caching mostly read-heavy workloads, but performance then suffers for these metadata-heavy or write-heavy workloads. The parallel performance (nfsd threads) with a single client mountpoint just can't compete with clients connected directly to the originating server.

Does anyone have any idea what the specific bottlenecks are here for parallel file creates from a single client to a single directory?

Cheers,

Daire