RE: Cluster service restarting Locally

We have an external iSCSI storage server, and on the cluster node we use
a software initiator to connect to the iSCSI target. One test formats
that iSCSI disk as ext3; another formats it as a GFS filesystem. I
thought ext3 would perform better than GFS, but the benchmark results
show GFS is better. That's what we are testing.
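For reference, a minimal sketch of the kind of sequential-write check
described above (not our actual benchmark; TARGET is a stand-in path,
to be pointed at a file on whichever mounted filesystem, ext3 or GFS,
is under test):

```shell
# Sequential-write throughput sketch. conv=fsync forces the data to
# disk before dd reports, so the final line reflects real I/O
# throughput rather than page-cache speed.
TARGET=${TARGET:-/tmp/bench.dat}   # stand-in path: use a file on the fs under test
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f "$TARGET"
```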



-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx
[mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Erling Nygaard
Sent: Thursday, March 09, 2006 4:34 PM
To: linux clustering
Subject: Re:  Cluster service restarting Locally

Oh, that's good to hear :-)
Multiple lock_nolock nodes would be... interesting...

However, you are saying you want to compare the performance of GFS
with the performance of iSCSI.
GFS is a filesystem; iSCSI is a block-level device.
May I ask how you intend to "compare" the performance of the two?

Erling

On 3/9/06, Hong Zheng <hong.zheng@xxxxxxxxx> wrote:
> I understand no_lock won't work for multiple nodes, so I never mount
> GFS w/ no_lock on multiple nodes; our cluster is a two-node
> active-passive cluster, so at any time only the active node has GFS
> mounted. I could use the iSCSI disk only, but I just want to test
> whether GFS has better performance than iSCSI.
>
> Hong
>
> -----Original Message-----
> From: linux-cluster-bounces@xxxxxxxxxx
> [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Erling Nygaard
> Sent: Thursday, March 09, 2006 3:52 PM
> To: linux clustering
> Subject: Re:  Cluster service restarting Locally
>
> I am sorry if this sounds a little harsh, but I'm not sure if laughing
> or crying is the correct reaction to this email.
>
> Let us get one thing straight.
> You are currently mounting a GFS filesystem _concurrently_ on multiple
> nodes using lock_nolock?
>
> If this is the case I can tell you that this will _not_ work. You
> _will_ corrupt your filesystem.
>
> Mounting a GFS filesystem with lock_nolock for all practical purposes
> turns the GFS filesystem into a local filesystem. There is _no_
> locking done anymore.
> With this setup there is no longer any coordination done among the
> nodes to control the filesystem access, so they are all going to step
> on each other's toes.
> You might as well use ext3, the end result will be the same ;-)
>
> The purpose of lock_nolock is to (temporarily) be able to mount a GFS
> filesystem on a single node in such cases where the entire locking
> infrastructure is unavailable. (Something like a massive cluster
> failure)
>
> So you should really look into setting up one of the lock services :-)
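[For illustration, the distinction above in mount-command terms; the
device and mount point names below are made up, and the lockproto
override is the one documented in gfs_mount(8):]

```shell
# normal clustered mount: GFS uses the lock manager recorded at mkfs time
mount -t gfs /dev/sdb1 /mnt/gfs

# single-node emergency mount: override the lock protocol; as noted
# above, this must only ever be mounted on ONE node at a time
mount -t gfs -o lockproto=lock_nolock /dev/sdb1 /mnt/gfs
```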
>
> E.
>
>
>
>
>
>
> On 3/9/06, Hong Zheng <hong.zheng@xxxxxxxxx> wrote:
> > Lon,
> >
> > Thanks for your reply. In my system I don't use any lock system like
> > lock_gulm or lock_dlm; I use no_lock because of our applications'
> > limitations. Do you think no_lock will also bring some lock traffic
> > or not? When I tried lock_gulm before, our application had very bad
> > performance, so I chose no_lock.
> >
> > And I'm not sure which update we have right now. Do you know the
> > clumanager and redhat-config-cluster versions in RHCS3 U7?
> >
> > Hong
> >
> > -----Original Message-----
> > From: linux-cluster-bounces@xxxxxxxxxx
> > [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Lon Hohberger
> > Sent: Wednesday, March 08, 2006 4:52 PM
> > To: linux clustering
> > Subject: RE:  Cluster service restarting Locally
> >
> > On Mon, 2006-03-06 at 14:02 -0600, Hong Zheng wrote:
> > > I'm having the same problem. My system configuration is as follows:
> > >
> > > 2-node cluster: RH ES3, GFS6.0, clumanager-1.2.28-1 and
> > > redhat-config-cluster-1.0.8-1
> > >
> > > Kernel: 2.4.21-37.EL
> > >
> > > Linux-iscsi-3.6.3 initiator: connections to iSCSI shared storage
> > > server
> >
> > If it's not fixed in U7 (which I think it should be), please file a
> > bugzilla... It sounds like the lock traffic is getting
> > network-starved.
> >
> > -- Lon
> >
> >
> > --
> > 
> > Linux-cluster@xxxxxxxxxx
> > https://www.redhat.com/mailman/listinfo/linux-cluster
> >
> >
>
>
> --
> -
> Mac OS X. Because making Unix user-friendly is easier than debugging
> Windows
>
> --
> 
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>


--
-
Mac OS X. Because making Unix user-friendly is easier than debugging
Windows

--

Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster


