Re: parallel file create rates (+high latency)

On 1/25/22 16:41, Daire Byrne wrote:
On Tue, 25 Jan 2022 at 22:11, Patrick Goetz <pgoetz@xxxxxxxxxxxxxxx> wrote:

IDK, 4000 images per collection, with hundreds of collections on disk?
Say at least 500,000 files?  Maybe a million? With most files about 1GB
in size.  I was trying to just rsync it all from the data server to a
ZFS-based backup server in our data center, but the backup started
failing constantly because the filesystem would change after rsync had
already constructed an index. Even after an initial copy, a backup like
that runs for over a week.  The strategy I'm about to try and implement
is to NFS mount the data server's data partition to the backup server
and then have a script walk through the directory hierarchy, rsyncing
collections one at a time.  ZFS send/receive would probably be better,
but the data server isn't configured with ZFS.
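
(For concreteness, that per-collection walk might be roughly the following; the paths and the --delete option are hypothetical, not from the original plan:)

    # mount the data server's export, then rsync one collection at a time
    mount -t nfs dataserver:/data /mnt/data
    for d in /mnt/data/*/; do
        rsync -a --delete "$d" "/backup/$(basename "$d")/"
    done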

We've strayed slightly off topic (even if we are still talking about file
creates over NFS), because you can get good parallel performance
(creates, reads, writes, etc.) over NFS with simultaneous copies using
lots of processes, if they are distributed across lots of directories.

Well "good" being subjective. I get 1,500 creates/s in a single
directory on a LAN NFS server from a single client and 160 creates/s
aggregate over my extreme 200ms using 10 clients & 10 different
directories. It seems fair all things considered.

But seeing as I do a lot of these kinds of big data moves (TBs) across
both the LAN and WAN, I can perhaps offer some advice from experience
that might be useful:

* walk the filesystem (locally) first to build a file list, split it,
and then use rsync --files-from (e.g. https://github.com/jbd/msrsync)
to feed multiple simultaneous rsyncs (a rough sketch follows this list).
* avoid NFS and use rsyncd directly between the servers (no ssh) so
the filesystem walks are "local" (see the rsyncd sketch further down).
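
For reference, the first approach might look roughly like this (paths,
chunk count, and destination are invented for illustration; the rsync://
destination assumes the rsyncd setup sketched further down, and msrsync's
-p process-count flag is from its README, going from memory):

    # build the file list locally on the data server
    cd /data && find . -type f > /tmp/filelist

    # split into 8 chunks without breaking lines (GNU split), one rsync each
    split -n l/8 /tmp/filelist /tmp/chunk.
    for c in /tmp/chunk.*; do
        rsync -a --files-from="$c" /data/ rsync://backup/data/ &
    done
    wait

    # or let msrsync do the walk/split/spawn itself; note it wants
    # local/mounted paths rather than remote ones
    msrsync -p 8 /data/ /mnt/backup/data/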


Thanks for this suggestion! This option didn't even occur to me. The only downside is that this server gets really busy during image processing, so I'm a bit worried about loading it down with dozens of simultaneous rsync processes. Also, the biggest performance problem in this system (which includes multiple GPU-laden workstations and 2 other NFS servers) is always I/O bottlenecks. I suppose the solution is to nice all the rsync processes to 19.
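
(Something along these lines, presumably; ionice is an extra assumption
beyond the nice mentioned above, and its idle class only takes effect with
the CFQ/BFQ I/O schedulers:)

    # lowest CPU priority plus idle I/O class, so backups yield to image processing
    nice -n 19 ionice -c 3 rsync -a /data/collection1/ rsync://backup/data/collection1/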

Question: given that I usually run backups from cron, and given that they can take a long time, how does msrsync avoid stepping on itself?
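
(One common answer, independent of msrsync itself, is to serialize the
cron job with flock so that a new run exits immediately if the previous
one is still going; the lock file and script paths here are made up:)

    # crontab entry: flock -n bails out if the lock is already held
    0 2 * * * flock -n /var/lock/nightly-backup.lock /usr/local/bin/backup.sh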

The advantage of rsync is that it does the filesystem walks at both
ends locally and compares the directory trees as it goes along. The
other nice thing it does is open a connection between sender and
receiver and stream all the file data down it, so it works really well
even for lists of small files. The TCP connection and window scaling
can sit at their maximum without any slow remote file metadata latency
disrupting them. Avoid the encapsulation of ssh and use rsyncd instead,
as it just speeds everything up.
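
A minimal rsyncd setup for that might look like the following (module
name and paths are illustrative, and you would want auth/uid settings
on top of this for real use):

    # /etc/rsyncd.conf on the backup server
    [data]
        path = /backup/data
        read only = no

    # start the daemon on the backup server
    rsync --daemon

    # then push from the data server, no ssh in the path
    rsync -a /data/ rsync://backup/data/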

And as always with any WAN connection, large buffers, window scaling,
no firewall DPI, and maybe some fancy congestion control like
BBR/BBRv2 all help.
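
On Linux those usually translate into sysctl knobs along these lines
(values are illustrative, sized for a high bandwidth-delay-product path):

    # let TCP buffers grow to match the bandwidth-delay product
    sysctl -w net.core.rmem_max=268435456
    sysctl -w net.core.wmem_max=268435456
    sysctl -w net.ipv4.tcp_rmem="4096 131072 268435456"
    sysctl -w net.ipv4.tcp_wmem="4096 131072 268435456"
    # BBR congestion control (needs the tcp_bbr module)
    sysctl -w net.ipv4.tcp_congestion_control=bbr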

Daire


