Hi,

On Fri, Nov 4, 2011 at 7:05 PM, Nicolas Ross <rossnick-lists@xxxxxxxxxxx> wrote:
> On some services, there are document directories that are huge, not that
> much in size (about 35 gigs), but in number of files, around one million.
> One service even has 3 data directories with that many files each.
>
> It works pretty well for now, but when it comes to data update (via rsync)
> and backup (also via rsync), the node doing the rsync crawls to a stop, all
> 16 logical cores are used at 100% system, and it sometimes freezes the file
> system for other services on other nodes.

I have a 600 GB GFS2 filesystem, and I resolved the rsync issue by running it as:

    ionice -c3 rsync -av ...

That puts rsync in the idle I/O scheduling class, so it is only given disk time when no other process needs I/O. Of course it takes a lot longer to complete the sync, but if time is not an issue, it can be a solution.

--
Regards,
Bohdan
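P.S. For illustration, a full invocation would look something like the following; the source and destination paths here are just placeholders, not the actual directories:

    # -c3 = idle I/O scheduling class: rsync only gets disk time when no
    # other process is waiting for I/O (paths below are placeholders)
    ionice -c3 rsync -av /data/documents/ /mnt/backup/documents/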