Re: fcntl locking lockup (dlm 1.07, GFS 6.1.5, kernel 2.6.9-67.EL)

We're actually using an MSA500, so what you're saying is that we're not using the proper hardware for GFS.
Can you tell us how bad this is? I'm asking because we're already on the second version of our product built on this solution, and we haven't had any issues before, so we never considered the hardware to be a problem.

When we picked this solution, HP presented the MSA500 as capable of concurrent access to files (of course there is some serialization inside; there is only one set of read heads per disk). Also, the HP DL360s have the iLO interface, which is supported as a fence device.

The difference now is that we are using file locking heavily and accessing files from multiple nodes at the same time. Everything seems to work fine except the locking.
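For context, the locking is plain POSIX fcntl record locking, roughly along these lines (a minimal sketch rather than our actual application code; the file path is made up):

/* Minimal sketch of the locking pattern (illustrative only). */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    struct flock fl;
    int fd = open("/mnt/gfs/shared.dat", O_RDWR);  /* hypothetical file on the GFS mount */
    if (fd < 0) {
        perror("open");
        exit(1);
    }

    fl.l_type   = F_WRLCK;   /* exclusive write lock */
    fl.l_whence = SEEK_SET;
    fl.l_start  = 0;
    fl.l_len    = 0;         /* 0 = lock to end of file, i.e. the whole file */

    /* F_SETLKW blocks until the lock is granted; this is where we see the hang. */
    if (fcntl(fd, F_SETLKW, &fl) < 0) {
        perror("fcntl F_SETLKW");
        exit(1);
    }

    /* ... read/modify/write the shared file ... */

    fl.l_type = F_UNLCK;     /* release the lock */
    fcntl(fd, F_SETLK, &fl);
    close(fd);
    return 0;
}

The hang shows up in the blocking F_SETLKW call; a non-blocking F_SETLK would return EAGAIN/EACCES instead of waiting.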

Coman

Kevin Anderson <kanderso@xxxxxxxxxx> wrote:
On Tue, 2008-01-08 at 22:39 -0500, Charlie Brady wrote:
On Tue, 8 Jan 2008, Gordan Bobic wrote:
> Charlie Brady wrote:
> > On Fri, 4 Jan 2008, Charlie Brady wrote:
> >
> >> I'm helping a colleague to collect information on an application lockup
> >> problem on a two-node DLM/GFS cluster, with GFS on a shared SCSI array.
> >>
> >> I'd appreciate advice as to what information to collect next.
> >
> > Nobody have any advice?
>
> Shared SCSI as in iSCSI SAN or as in a shared SCSI bus with two machines
> connected via a SCSI cable?

The latter. I don't have the details immediately at hand, but it's all HP
gear. A pair of DL380s with an external SCSI array (MSAxx), IIRC.
If it is an MSA20, MSA30, or MSA500, it won't work with GFS. A shared SCSI bus isn't really shared: accesses lock the bus, so while one node is accessing the storage the other node is locked out. GFS requires shared concurrent access to the storage devices, which probably explains the hangs you were seeing.

So either get an iSCSI or Fibre Channel storage array, or go strictly with a failover storage architecture in which only one node has the filesystem mounted at any one time. In that case you don't need GFS any more, just Cluster Suite to manage the failover.
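(For illustration only: a failover-only service of that kind might be sketched roughly like this in cluster.conf for rgmanager. The cluster name, node names, device, and mount point below are placeholders, and the fencing configuration is omitted.)

<?xml version="1.0"?>
<cluster name="example" config_version="1">
  <clusternodes>
    <clusternode name="node1" votes="1"/>
    <clusternode name="node2" votes="1"/>
  </clusternodes>
  <rm>
    <failoverdomains>
      <failoverdomain name="storage-fd" ordered="1" restricted="1">
        <failoverdomainnode name="node1" priority="1"/>
        <failoverdomainnode name="node2" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <!-- One service owns the mount; rgmanager relocates it to the other node on failure. -->
    <service name="shared-storage" domain="storage-fd" autostart="1">
      <fs name="data" device="/dev/sda1" mountpoint="/data" fstype="ext3" force_unmount="1"/>
    </service>
  </rm>
</cluster>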

Kevin




--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
