On Jun 29, 2018, at 8:37 PM, Steve French <smfrench@xxxxxxxxx> wrote:
>
> I have been looking at i/o patterns from various copy tools on Linux,
> and it is pretty discouraging - I am hoping that I am forgetting an
> important one that someone can point me to ...
>
> Some general problems:
> 1) if source and target on the same file system it would be nice to
> call the copy_file_range syscall (AFAIK only test tools call that),
> although in some cases at least cp can do it for --reflink
> 2) if source and target on different file systems there are multiple problems
>     a) smaller i/o (rsync e.g. maxes at 128K!)
>     b) no async parallelized writes sent down to the kernel so writes
>     get serialized (either through page cache, or some fs offer option to
>     disable it - but it still is one thread at a time)
>     c) sparse file support is mediocre (although cp has some support
>     for it, and can call fiemap in some cases)
>     d) for file systems that prefer setting the file size first (to
>     avoid metadata penalties with multiple extending writes) - AFAIK only
>     rsync offers that, but rsync is one of the slowest tools otherwise
>
> I have looked at cp, dd, scp, rsync, gio, gcp ... are there others?

In the HPC world, there are a number of parallel copy tools, like MPI
fileutils (https://github.com/hpc/mpifileutils) which can copy single
files in parallel, as well as whole directory trees.  As the name
implies, it uses MPI for running on multiple nodes, but it may be
possible to modify the tools to run multi-threaded on a single node
as well.

> What I am looking for (and maybe we just need to patch cp and rsync
> etc.) is more like what you see with other OS ...
> 1) options for large i/o sizes (network latencies in network/cluster
> fs can be large, so prefer larger 1M or 8M in some cases I/Os)

Lustre reports a blocksize of 2MB for stat(), which cp(1) uses.  It
probably wouldn't be a bad idea to have it use at least 1MB by default,
instead of only e.g. 4KB for local filesystems.

> 2) parallelizing writes so not just one write in flight at a time
> 3) options to turn off the page cache (large number of large file
> copies are not going to benefit from reuse of pages in the page cache
> so going through the page cache may be suboptimal in that case)

Writing to cache is usually faster than O_DIRECT, until you have dozens
of threads doing parallel writes, or you are using AIO.  Instead of
using O_DIRECT, using posix_fadvise(POSIX_FADV_DONTNEED) to drop the
source file from cache after it is finished is reasonable.  Dropping
the target file after some delay (when copying many files) would also
be useful.

> 4) option to set the file size first, and then fill in writes (so
> non-extending writes)

What about using fallocate()?

> 5) sparse file support
> (and it would also be nice to support copy_file_range syscall ... but
> that is unrelated to the above)
>
> Am I missing some magic tool?  Seems like Windows has various options
> for copy tools - but looking at Linux i/o patterns from these tools
> was pretty depressing - I am hoping that there are other choices.

Cheers, Andreas
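
PS: roughly, a copy tool could combine the pieces above along these
lines - untested sketch only, assuming Linux with a glibc new enough
(2.27+) to have the copy_file_range() wrapper; the helper name is made
up and error handling is minimal:

    /*
     * Sketch: preallocate the target, copy with copy_file_range(),
     * then drop the source from the page cache when done.
     */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int sketch_copy(const char *src, const char *dst)
    {
            int in = open(src, O_RDONLY);
            int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
            struct stat st;
            off_t remaining;
            int rc = -1;

            if (in < 0 || out < 0 || fstat(in, &st) < 0)
                    goto out;

            /* Set the target size up front so writes are non-extending. */
            if (st.st_size > 0 && fallocate(out, 0, 0, st.st_size) < 0)
                    perror("fallocate"); /* not fatal, copy anyway */

            /* Kernel may copy less than requested per call, so loop. */
            for (remaining = st.st_size; remaining > 0; ) {
                    ssize_t n = copy_file_range(in, NULL, out, NULL,
                                                remaining, 0);

                    if (n <= 0) {
                            perror("copy_file_range");
                            goto out;
                    }
                    remaining -= n;
            }

            /* Source pages are unlikely to be reused - drop them. */
            posix_fadvise(in, 0, 0, POSIX_FADV_DONTNEED);
            rc = 0;
    out:
            if (in >= 0)
                    close(in);
            if (out >= 0)
                    close(out);
            return rc;
    }

For a cross-filesystem copy the same structure would apply, just with
large (1MB+) reads/writes or AIO in place of copy_file_range(), and a
second POSIX_FADV_DONTNEED on the target once its data has been
written back.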