Network/Load Average issues when running Dump of root filesystem.

I currently run a dump of our / filesystem to a network drive in case of a
server failure, e.g. disk issues. I cannot run the dump to a local disk as
we do not have enough space, so it is kicked off via cron at 10:15pm. While
it runs, the load average goes through the roof, as you would expect, and
can sometimes hit 50.00, and the mounted network drive the dump is writing
to slows down to the point where our other Linux machines report /software
not responding for a few minutes a night. Does anyone know a way I can get
this dump to work without putting so much load on our network and the
machine itself? Is there a better way of doing this? Any advice would be
great. Below is the load as per the top command and the dump command run
via a script.


01/24/08, 00:00:01 up 2 days, 13:12, 5 users, load average: 44.58, 40.13, 35.87

/sbin/dump 0uf /software/grid_dump/dump.$DAY /
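One common approach (this is a suggestion, not something from the original
script) is to run dump at the lowest CPU and I/O priority and compress the
stream before it hits the network, so fewer bytes cross the wire to the NFS
mount. A minimal sketch, assuming util-linux nice/ionice and GNU gzip are
installed, and keeping the same target path and $DAY variable as the
original script:

```shell
#!/bin/sh
# Hypothetical low-impact variant of the nightly dump job.
# Assumptions: ionice is available (its idle class only takes effect
# with the CFQ/BFQ I/O scheduler), and $DAY is set as in the original
# cron script.
DAY=$(date +%a)

# nice -n 19  : lowest CPU priority
# ionice -c3  : "idle" I/O class - dump only gets disk time when the
#               disk is otherwise idle
# dump ... f -: write the dump to stdout instead of a file
# gzip -1     : cheap compression to cut network traffic to the mount
nice -n 19 ionice -c3 /sbin/dump 0uf - / \
  | gzip -1 > /software/grid_dump/dump.$DAY.gz
```

Note the output file gains a .gz suffix, so a restore would go through
`gzip -dc ... | restore -f -`. If the bottleneck is the network rather
than the local disk, a bandwidth limiter such as trickle in front of the
write can also help, at the cost of a longer backup window.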

Regards

Andrew Bridgeman

**********************************************************************
This transmission is confidential and must not be used or disclosed by
anyone other than the intended recipient. Neither Corus Group Limited nor
any of its subsidiaries can accept any responsibility for any use or
misuse of the transmission by anyone.

For address and company registration details of certain entities
within the Corus group of companies, please visit
http://www.corusgroup.com/entities

**********************************************************************

-- 
redhat-list mailing list
unsubscribe mailto:redhat-list-request@xxxxxxxxxx?subject=unsubscribe
https://www.redhat.com/mailman/listinfo/redhat-list