Re: XFS on RBD on EC painfully slow

On Thu, May 27, 2021 at 02:54:00PM -0500, Reed Dier wrote:
> Hoping someone may be able to help point out where my bottleneck(s) are.
> 
> I have an 80TB kRBD image on an EC8:2 pool, with an XFS filesystem on top of that.
> This was not an ideal scenario; rather, it was a rescue mission to dump a large, aging RAID array before it was too late, so I'm working with the hand I was dealt.
> 
> To further complicate things, the main directory structure consists of lots and lots of small files in deep directories.
> 
> My goal is to try to rsync (or otherwise copy) data from the RBD to cephfs, but it's just unbearably slow and will take ~150 days to transfer ~35TB, which is far from ideal.

(Disclaimer: no experience with cephfs)

I've found rsync to be a wonderful tool for long distances and large files,
but less so for local networks and small files, even with local disks.

Usually I do something like

# pack on the source side, show throughput with pv, unpack on the destination side
( cd src/ && tar --acls --xattrs --numeric-owner --sparse -cf - . ) |
  pv -pterab |
  ( cd dst/ && tar --acls --xattrs --numeric-owner --sparse -xf - )

If src and dst are not mounted on the same machine, you can use
netcat/socat to stream the tar from one system to the other, or pipe it
through ssh if you need encrypted transport; rough sketches below.
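
A rough sketch of both variants, with hypothetical names (dst-host,
port 9999, /mnt/rbd as source, /mnt/cephfs as destination); note that
BSD netcat spells the listen option "-l 9999" while traditional netcat
wants "-l -p 9999":

# over ssh, run from the sender (encrypted transport):
( cd /mnt/rbd && tar --acls --xattrs --numeric-owner --sparse -cf - . ) |
  ssh dst-host 'cd /mnt/cephfs && tar --acls --xattrs --numeric-owner --sparse -xf -'

# over netcat on a trusted LAN: start the receiver first,
nc -l 9999 | ( cd /mnt/cephfs && tar --acls --xattrs --numeric-owner --sparse -xf - )
# then stream from the sender:
( cd /mnt/rbd && tar --acls --xattrs --numeric-owner --sparse -cf - . ) | nc dst-host 9999

You can keep the pv stage in either pipeline if you want the progress
display.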

This does not have the resume capability of rsync, but for small files
it is much faster. After that you can still throw in a final rsync for
changes accumulated while the initial transfer was running.
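
For the catch-up pass, something like this (paths hypothetical;
-H/-A/-X/-S preserve hard links, ACLs, xattrs and sparse files to match
the tar flags above):

rsync -aHAXS --numeric-ids /mnt/rbd/ /mnt/cephfs/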

Matthias
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


