Re: Timeout causing GFS filesystem inaccessibility

On Thu, Jun 09, 2005 at 08:33:34AM -0400, Kovacs, Corey J. wrote:
> I've seen the same behavior with slocate.cron doing its thing. I had to add
> gfs to its filesystem exclude list. My setup is as follows ...
> 
> 3 HP-DL380-G3's
> 1 MSA1000
> 6 FC2214 (QL2340) FC cards
> 
> The three nodes are set up as lock managers and a 1TB filesystem was
> created. When populating the filesystem using scp, rsync, etc. from
> another machine with approximately 400GB worth of 50k files, the target
> machine would become unresponsive. This led me to move to the latest
> version available at the time (6.0.2.20-1) and to set up alternate NICs
> for lock_gulmd to use, which seems to have helped tremendously.
> 
> That said, after the first successful complete data transfer on this
> cluster I went to do a 'du -sh' on the mount point and the machine got
> into a state where it would refuse to fork, which is exactly the
> problem I saw with slocate.cron.
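
For reference, the slocate exclusion Corey describes above would look
roughly like the following. This is only a sketch, assuming the stock
Red Hat /etc/cron.daily/slocate.cron script; the filesystem and path
lists vary by release, so check your own copy before editing.

    #!/bin/sh
    # /etc/cron.daily/slocate.cron -- nightly locate database rebuild.
    # "gfs" is added to the filesystem-type exclude list (-f) so that
    # updatedb never walks the shared GFS mount on every node at once.
    renice +19 -p $$ >/dev/null 2>&1
    /usr/bin/updatedb -f "nfs,smbfs,ncpfs,proc,devpts,gfs" \
                      -e "/tmp,/var/tmp,/usr/tmp,/afs,/net"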

That's not good. Can you do a gulm_tool getstats <masterserver>:lt000?
I just want to see how full the queues are.
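
For example, assuming the current gulm master is a hypothetical host
named node1 (substitute whichever node lock_gulmd reports as master),
that would be:

    # node1 is a placeholder for the current gulm master server.
    # The lt000 service reports per-lock-table statistics, including
    # how full the lock queues are.
    gulm_tool getstats node1:lt000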

-- 
Michael Conrad Tadpol Tilstra
At night as I lay in bed looking at the stars I thought 'Where the hell is
the ceiling?' 

--

Linux-cluster@xxxxxxxxxx
http://www.redhat.com/mailman/listinfo/linux-cluster
