I'm trying to rsync an 8TB data folder containing squillions of small files and it's taking forever (i.e. weeks) to get anywhere. I'm assuming the slow bit is check-summing everything with a single CPU (even though it's on a 12-core server ;-( ).

Is it possible to do something simple like scp the whole dir in one go so the two copies are identical in the first instance, then get rsync to just keep them in sync without doing an initial transfer? Or is there a better way? (A rough sketch of the two-step approach I have in mind is below.)

Thanx,

Russell Smithies
Infrastructure Technician
T 03 489 9085
M 027 4734 600
E russell.smithies@xxxxxxxxxxxxxxxx

Invermay Agricultural Centre
Puddle Alley, Private Bag 50034, Mosgiel 9053, New Zealand
T +64 3 489 3809  F +64 3 489 3739
www.agresearch.co.nz
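
P.S. A minimal sketch of what I mean, not tested at this scale -- the paths and the hostname "backuphost" are just placeholders:

    # first pass: plain bulk copy, whole files, skip the delta algorithm
    rsync -aHS --whole-file --partial /data/ backuphost:/data/

    # or stream the first pass through tar over ssh
    # (assumes /data already exists on the far side)
    tar -cf - -C /data . | ssh backuphost 'tar -xf - -C /data'

    # later passes: normal rsync only re-sends files whose size/mtime changed
    rsync -aHS --delete /data/ backuphost:/data/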