Question :)


First - thanks for the help the last time I poked my pointy little head in here.
Things have been -much- more stable since we bumped the lock limit to 2097152 ;)
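(For reference, 2097152 is the limit we bumped; to see how many locks a mount is actually holding, something like the following should work -- assuming gfs_tool is the right interface on this setup, and with /data standing in for the real mount point:

    gfs_tool counters /data    # per-filesystem glock/lock counters

)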

However, we're still running into the occasional "glitch" where it seems like a single process locks up -all- disk access on us until its operation completes.
Specifically, we see this when folks are doing rsyncs of large amounts of data (one of my faculty has been trying to copy over a couple thousand 16MB files). Even piping tar through ssh results in similar behaviour; run from the target machine:

    ssh user@host "cd /data/dir/path; tar -cpsf -" | tar -xpsf -

Is this tunable, or simply a fact of life we're going to have to live with? It only occurs with large, or long-running, writes. Reads aren't a problem (it just takes 14 hours to dump 1.5TB to tape...)
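(To put the "tunable" question concretely: I'm wondering about per-mount knobs of the gfs_tool settune variety, for example -- and these are only meant as illustrations, not something we've tried --

    gfs_tool gettune /data                  # list current tunables for the mount (placeholder path)
    gfs_tool settune /data demote_secs 200  # example: demote cached glocks sooner

or whether the stall during big writes is simply inherent behaviour.)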

Thanks!

--
Jerry Gilyeat, RHCE
Systems Administrator
Molecular Microbiology and Immunology
Johns Hopkins Bloomberg School of Public Health

