Re: parallel file create rates (+high latency)

Hi,

This seemed like a good test case for Neil Brown's "namespaces" patch:

https://lore.kernel.org/linux-nfs/162458475606.28671.1835069742861755259@xxxxxxxxxxxxxxxxxxxxx

The interesting thing about this is that we get independent slot
tables for the same remote server (and directory).

So we can test like this:

# mount server 10 times with a different namespace
for x in {0..9}; do
    sudo mkdir -p /srv/data-$x
    sudo mount -o vers=4.2,namespace=server${x},actimeo=3600,nocto \
        server:/data /srv/data-${x}
done

# create files across the namespace mounts but in same remote directory
for x in {1..2000}; do
    echo /srv/data-$((RANDOM %10))/dir1/touch.$x
done | xargs -n1 -P 100 -iX -t touch X 2>&1 | pv -l -a >|/dev/null

Doing this, we get the same aggregate file create rate (32/s) as if
we had used 10 individual clients.

I can only assume this is because of the independent slot table RPC queues?

But I have no idea why the rate also seems to depend on whether you
use multiple remote directories or a single shared directory.
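
One rough way to confirm that the namespace mounts really are driving
independent RPC streams is to compare per-mount op counts in
/proc/self/mountstats, since each mount gets its own "device ...
mounted on ..." section there. This is only a sketch: per_mount_ops
is a made-up helper name, and the exact column layout varies by
statvers, so treat the field positions as assumptions:

```shell
# Sketch: per-mount OPEN counts from /proc/self/mountstats.
# per_mount_ops is a hypothetical helper; the "device <export>
# mounted on <mountpoint> ..." section header is standard mountstats
# layout, but per-op column details can vary by statvers.
per_mount_ops() {
    # $1 = mountstats file, $2 = mountpoint prefix (e.g. /srv/data-)
    awk -v pfx="$2" '
        /^device / { mnt = "" }                    # new mount section
        index($0, "mounted on " pfx) { mnt = $5 }  # $5 = mountpoint
        mnt != "" && $1 == "OPEN:" { print mnt, $1, $2 }  # $2 = op count
    ' "$1"
}

# e.g. compare the ten namespace mounts from the test above:
# per_mount_ops /proc/self/mountstats /srv/data-
```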

So in summary:
* concurrent processes creating files in a single remote directory = slow
* concurrent processes creating files across many directories = fast
* concurrent clients creating files in a shared remote directory = fast
* concurrent namespaces creating files in a shared remote directory = fast
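
For reference, the "many directories = fast" case (second bullet) from
my earlier tests looks something like this sketch; MNT, the directory
naming, and the counts are illustrative, so point MNT at a real NFS
mount (e.g. /tmp/data) to reproduce the actual numbers:

```shell
# Sketch of the fast multi-directory case: spread the creates across
# 100 directories instead of one shared one. MNT and the counts are
# illustrative placeholders.
MNT=${MNT:-/tmp/data}
for d in {0..99}; do mkdir -p "$MNT/dir.$d"; done
for x in {1..2000}; do
    echo "$MNT/dir.$((x % 100))/touch.$x"
done | xargs -n1 -P 100 -iX touch X
```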

There is probably also some overlap with my previous queries around
parallel io/metadata performance:

https://marc.info/?t=160199739400001&r=2&w=4

Daire

On Sun, 23 Jan 2022 at 23:53, Daire Byrne <daire@xxxxxxxx> wrote:
>
> Hi,
>
> I've been experimenting a bit more with high latency NFSv4.2 (200ms).
> I've noticed a difference between the file creation rates when you
> have parallel processes running against a single client mount creating
> files in multiple directories compared to in one shared directory.
>
> If I start 100 processes on the same client creating unique files in a
> single shared directory (with 200ms latency), the rate of new file
> creates is limited to around 3 files per second. Something like this:
>
> # add latency to the client
> sudo tc qdisc replace dev eth0 root netem delay 200ms
>
> sudo mount -o vers=4.2,nocto,actimeo=3600 server:/data /tmp/data
> for x in {1..10000}; do
>     echo /tmp/data/dir1/touch.$x
> done | xargs -n1 -P 100 -iX -t touch X 2>&1 | pv -l -a > /dev/null
>
> It's a similar (slow) result for NFSv3. If we run it again just to
> update the existing files, it's a lot faster because of the
> nocto,actimeo and open file caching (32 files/s).
>
> Then if I switch it so that each process on the client creates
> hundreds of files in a unique directory per process, the aggregate
> file create rate increases to 32 per second. For NFSv3 it's 162
> aggregate new files per second. So much better parallelism is possible
> when the creates are spread across multiple remote directories on the
> same client.
>
> If I then take the slow 3 creates per second example again and instead
> use 10 client hosts (all with 200ms latency) and set them all creating
> in the same remote server directory, then we get 3 x 10 = 30 creates
> per second.
>
> So we can achieve some parallel file create performance in the same
> remote directory but just not from a single client running multiple
> processes. Which makes me think it's more of a client limitation
> than a server locking issue?
>
> My interest in this (as always) is because, while having hundreds of
> processes creating files in the same directory might not be a common
> workload, it becomes one when you are re-exporting a filesystem and
> multiple clients are creating new files for writing. For example, a
> batch job creating files in a common output directory.
>
> Re-exporting is a useful way of caching mostly read heavy workloads
> but then performance suffers for these metadata heavy or writing
> workloads. The parallel performance (nfsd threads) with a single
> client mountpoint just can't compete with directly connected clients
> to the originating server.
>
> Does anyone have any idea what the specific bottlenecks are here for
> parallel file creates from a single client to a single directory?
>
> Cheers,
>
> Daire


