Re: [External Email] Re: XFS on RBD on EC painfully slow

Providing an overdue update to wrap this thread up.

It turns out I wasn't seeing the forest for the trees.
Parallelizing the copy did in fact yield far better throughput than the single-threaded copies.

In the end we used a home-brewed Python script to parallelize the copy, using cp rather than rsync and working through the tree in batches; it took about 48 hours to copy ~35 TiB from the RBD to CephFS.
I think it could have gone a bit faster, but we throttled it in an effort to keep the RBD NFS export somewhat usable with respect to latency and IOPS.
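
The script itself isn't reproduced here, but the same batched, parallel-cp idea can be sketched with stock shell tools; the mount points and job count below are only placeholders:

  # Hypothetical mounts: RBD-backed XFS at /mnt/rbd, CephFS at /mnt/cephfs.
  # One cp per top-level entry, at most 8 running at once.
  find /mnt/rbd -mindepth 1 -maxdepth 1 -print0 \
      | xargs -0 -P8 -I{} cp -a {} /mnt/cephfs/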

So all in all, when in doubt, parallelize.
Otherwise, you're stuck with painfully slow single-threaded performance.

Reed

> On May 30, 2021, at 6:32 AM, Dave Hall <kdhall@xxxxxxxxxxxxxx> wrote:
> 
> Reed,
> 
> I'd like to add to Sebastian's comments - the problem is probably rsync.
> 
> I inherited a smaller setup than yours when I assumed my current responsibilities - an XFS file system on hardware RAID, exported over NFS.  The backup process is based on RSnapshot, which is in turn based on rsync over SSH, with another XFS on hardware RAID as the target.  The file system contains a couple thousand user home directories for Computer Science students, so it is wide and deep with lots of small files.
> 
> I was tormented by a daily backup process that was taking 3 days to run - copying the entire file system in a single rsync.  What I ultimately concluded is that rsync runs exponentially slower as the size of the file tree to be copied increases.  To get around this, if you play some games with find and GNU parallel, you can break your file system into many small sub-trees and run many rsyncs in parallel.  Copying this way, you will see amazing throughput.  
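> 
> Roughly the shape of that pipeline (the paths and job count here are made up):
> 
>   find /srcfs -mindepth 1 -maxdepth 1 -type d -print0 \
>       | parallel -0 -j 16 rsync -aS {}/ /dstfs/{/}/
> 
> Go a level or two deeper with -mindepth/-maxdepth if the top-level directories are too few or too large to keep all the jobs busy.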
> 
> From the point of view of a programmer, I think that rsync must build a representation of the source and destination file trees in memory and then traverse and re-traverse them to make sure everything got copied and that nothing changed in the source tree.  I've never read the code, but I've seen articles that confirm my theory.
> 
> In my case, because of inflexibility in RSnapshot, I have ended up with 26 consecutive rsyncs (a*, b*, c*, etc.), and I still go about twice as fast as I would with one large rsync.  However, when I transferred this file system to a new NFS server and new storage, I was able to directly rsync each user in parallel.  I filled up a 10Gb pipe and copied the whole FS in an hour.
> 
> Typing in a hurry.  If my explanation is confusing, please don't hesitate to ask me to explain better.  
> 
> -Dave
> 
> --
> Dave Hall
> Binghamton University
> kdhall@xxxxxxxxxxxxxx
> 
> 
> On Fri, May 28, 2021 at 11:12 AM Sebastian Knust <sknust@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
> Hi Reed,
> 
> To add to this comment by Weiwen:
> 
> On 28.05.21 13:03, 胡 玮文 wrote:
> > Have you tried just starting multiple rsync processes simultaneously to transfer different directories? Distributed systems like Ceph often benefit from more parallelism.
> 
> When I migrated from XFS on iSCSI (legacy system, no Ceph) to CephFS a 
> few months ago, I used msrsync [1] and was quite happy with the speed. 
> For your use case, I would start with -p 12 but might experiment with up 
> to -p 24 (as you only have 6C/12T in your CPU). With many small files, 
> you also might want to increase -s from the default 1000.
> 
> Note that msrsync does not work with the --delete rsync flag. As I was 
> syncing a live system, I ended up with this workflow:
> 
> - Initial sync with msrsync (something like ./msrsync -p 12 --progress 
> --stats --rsync "-aS --numeric-ids" ...)
> - Second sync with msrsync (to sync changes during the first sync)
> - Take old storage off-line for users / read-only
> - Final rsync with --delete (i.e. rsync -aS --numeric-ids --delete ...)
> - Mount cephfs at location of old storage, adjust /etc/exports with fsid 
> entries where necessary, turn system back on-line / read-write
> 
> Cheers
> Sebastian
> 
> [1] https://github.com/jbd/msrsync

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



