Send Linux-cluster mailing list submissions to
linux-cluster@xxxxxxxxxx
To subscribe or unsubscribe via the World Wide Web, visit
https://www.redhat.com/mailman/listinfo/linux-cluster
or, via email, send a message with subject or body 'help' to
linux-cluster-request@xxxxxxxxxx
You can reach the person managing the list at
linux-cluster-owner@xxxxxxxxxx
When replying, please edit your Subject line so it is more specific
than"Re: Contents of Linux-cluster digest..."
Today's Topics:
1. Re: Linux-cluster Digest, Vol 83, Issue 13 (Sunil_Gupta2@xxxxxxxx)
2. which is better gfs2 and ocfs2? (yue)
3. Re: which is better gfs2 and ocfs2? (Jeff Sturm)
4. Re: which is better gfs2 and ocfs2? (Michael Lackner)
5. Re: which is better gfs2 and ocfs2? (rhurst@xxxxxxxxxxxxxxxxx)
6. Re: dlm-pcmk-3.0.17-1.fc14.x86_64 and
gfs-pcmk-3.0.17-1.fc14.x86_64 woes (Gregory Bartholomew)
7. Re: which is better gfs2 and ocfs2? (Thomas Sjolshagen)
8. Re: dlm-pcmk-3.0.17-1.fc14.x86_64 and
gfs-pcmk-3.0.17-1.fc14.x86_64 woes (Andrew Beekhof)
----------------------------------------------------------------------
Message: 1
Date: Wed, 9 Mar 2011 17:44:17 +0530
From:<Sunil_Gupta2@xxxxxxxx> <mailto:Sunil_Gupta2@xxxxxxxx>
To:<linux-cluster@xxxxxxxxxx> <mailto:linux-cluster@xxxxxxxxxx>
Subject: Re: Linux-cluster Digest, Vol 83, Issue 13
Message-ID:
<8EF1FE59C3C8694E94F558EB27E464B71D130C752D@xxxxxxxxxxxxxxxxxxxxxxxxxx> <mailto:8EF1FE59C3C8694E94F558EB27E464B71D130C752D@xxxxxxxxxxxxxxxxxxxxxxxxxx>
Content-Type: text/plain; charset="us-ascii"
One node is offline, so the cluster is not fully formed. Check whether multicast traffic is working between the nodes.
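A minimal sketch of that check, assuming eth0 carries the cluster traffic and that omping (Red Hat's multicast ping utility) is installed on both nodes:

# Run simultaneously on corviewprimary and corviewsecondary; omping
# reports unicast and multicast loss between the listed hosts.
omping corviewprimary corviewsecondary

# Alternatively, watch for the cluster's multicast packets on the wire
# (cman defaults to a multicast address in 239.192.0.0/16):
tcpdump -i eth0 net 239.192.0.0/16

If the multicast probes are lost, look at IGMP snooping on the switch between the nodes.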
--Sunil
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Balaji
Sent: Wednesday, March 09, 2011 4:54 PM
To: linux-cluster@xxxxxxxxxx
Subject: Re: Linux-cluster Digest, Vol 83, Issue 13
Dear All,
Please find the attached log file for further analysis.
Please help me solve this problem as soon as possible.
The clustat command output is below:
[root@corviewprimary ~]# clustat
Cluster Status for EMSCluster @ Wed Mar 9 17:00:03 2011
Member Status: Quorate
Member Name                     ID   Status
------ ----                     ---- ------
corviewprimary                  1    Online, Local
corviewsecondary                2    Offline
[root@corviewprimary ~]#
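If it helps, I can also collect the cluster stack's own view of membership; a sketch of the commands, assuming a cman-based cluster:

# Quorum state, expected votes, and the multicast address in use:
cman_tool status
# Per-node membership as cman sees it:
cman_tool nodes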
Regards,
-S.Balaji
linux-cluster-request@xxxxxxxxxx wrote:
Today's Topics:
1. Re: clvmd hangs on startup (Valeriu Mutu)
2. Re: clvmd hangs on startup (Jeff Sturm)
3. dlm-pcmk-3.0.17-1.fc14.x86_64 and
gfs-pcmk-3.0.17-1.fc14.x86_64 woes (Gregory Bartholomew)
4. Re: dlm-pcmk-3.0.17-1.fc14.x86_64 and
gfs-pcmk-3.0.17-1.fc14.x86_64 woes (Fabio M. Di Nitto)
5. Re: unable to live migrate a vm in rh el 6: Migration
unexpectedly failed (Lon Hohberger)
6. Re: rgmanager not running (Sunil_Gupta2@xxxxxxxx)
7. Re: unable to live migrate a vm in rh el 6: Migration
unexpectedly failed (Gianluca Cecchi)
8. Re: dlm-pcmk-3.0.17-1.fc14.x86_64 and
gfs-pcmk-3.0.17-1.fc14.x86_64 woes (Andrew Beekhof)
9. Re: unable to live migrate a vm in rh el 6: Migration
unexpectedly failed (Gianluca Cecchi)
10. Re: unable to live migrate a vm in rh el 6: Migration
unexpectedly failed (Gianluca Cecchi)
----------------------------------------------------------------------
Message: 1
Date: Tue, 8 Mar 2011 12:11:53 -0500
From: Valeriu Mutu<vmutu@xxxxxxxxxxxxxx> <mailto:vmutu@xxxxxxxxxxxxxx><mailto:vmutu@xxxxxxxxxxxxxx>
To: linux clustering<linux-cluster@xxxxxxxxxx> <mailto:linux-cluster@xxxxxxxxxx><mailto:linux-cluster@xxxxxxxxxx>
Subject: Re: clvmd hangs on startup
Message-ID:<20110308171153.GB272@xxxxxxxxxxxxxxxxxxxxx> <mailto:20110308171153.GB272@xxxxxxxxxxxxxxxxxxxxx><mailto:20110308171153.GB272@xxxxxxxxxxxxxxxxxxxxx>
Content-Type: text/plain; charset=us-ascii
Hi,
I think the problem is solved. I was using a 9000-byte MTU on the Xen virtual machines' iSCSI interface. Switching back to a 1500-byte MTU got clvmd working again.
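For anyone who wants to re-enable jumbo frames later, the 9000-byte path can first be verified end to end; a minimal sketch, assuming the iSCSI portal answers ICMP (the address is a placeholder):

# 8972 = 9000 bytes minus 28 bytes of IP + ICMP headers; -M do forbids
# fragmentation, so this succeeds only if every hop passes 9000-byte frames.
ping -M do -s 8972 -c 3 <iscsi-portal-ip>
# The equivalent check for a standard 1500-byte path:
ping -M do -s 1472 -c 3 <iscsi-portal-ip>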
On Thu, Mar 03, 2011 at 11:50:57AM -0500, Valeriu Mutu wrote:
On Wed, Mar 02, 2011 at 05:36:45PM -0500, Jeff Sturm wrote:
Double-check that the 2nd node can read and write the shared iSCSI
storage.
Reading from and writing to the iSCSI storage device works, as seen below.
On the 1st node:
[root@vm1 cluster]# dd count=10000 bs=1024 if=/dev/urandom of=/dev/mapper/pcbi-homes
10000+0 records in
10000+0 records out
10240000 bytes (10 MB) copied, 3.39855 seconds, 3.0 MB/s
[root@vm1 cluster]# dd count=10000 bs=1024 if=/dev/mapper/pcbi-homes of=/dev/null
10000+0 records in
10000+0 records out
10240000 bytes (10 MB) copied, 0.331069 seconds, 30.9 MB/s
On the 2nd node:
[root@vm2 ~]# dd count=10000 bs=1024 if=/dev/urandom of=/dev/mapper/pcbi-homes
10000+0 records in
10000+0 records out
10240000 bytes (10 MB) copied, 3.2465 seconds, 3.2 MB/s
[root@vm2 ~]# dd count=10000 bs=1024 if=/dev/mapper/pcbi-homes of=/dev/null
10000+0 records in
10000+0 records out
10240000 bytes (10 MB) copied, 0.223337 seconds, 45.8 MB/s
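One caveat: each read test above re-reads data that was just written, so the 30-46 MB/s read figures may largely reflect the page cache rather than the iSCSI path. A hedged variant of the same commands with dd's direct-I/O flags would measure the storage itself:

# oflag=direct / iflag=direct bypass the page cache:
dd count=10000 bs=1024 oflag=direct if=/dev/urandom of=/dev/mapper/pcbi-homes
dd count=10000 bs=1024 iflag=direct if=/dev/mapper/pcbi-homes of=/dev/null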
------------------------------
Message: 2
Date: Wed, 9 Mar 2011 22:13:35 +0800 (CST)
From: yue<ooolinux@xxxxxxx> <mailto:ooolinux@xxxxxxx>
To: linux-cluster<linux-cluster@xxxxxxxxxx> <mailto:linux-cluster@xxxxxxxxxx>
Subject: which is better gfs2 and ocfs2?
Message-ID:<4f996c7c.1356a.12e9af733aa.Coremail.ooolinux@xxxxxxx> <mailto:4f996c7c.1356a.12e9af733aa.Coremail.ooolinux@xxxxxxx>
Content-Type: text/plain; charset="gbk"
Which is better, GFS2 or OCFS2?
I want to share an FC SAN; do you know which is better for stability and performance?
Thanks
------------------------------
Message: 3
Date: Wed, 9 Mar 2011 09:48:03 -0500
From: Jeff Sturm<jeff.sturm@xxxxxxxxxx> <mailto:jeff.sturm@xxxxxxxxxx>
To: linux clustering<linux-cluster@xxxxxxxxxx> <mailto:linux-cluster@xxxxxxxxxx>
Subject: Re: which is better gfs2 and ocfs2?
Message-ID:
<64D0546C5EBBD147B75DE133D798665F0855C34D@xxxxxxxxxxxxxxxxx> <mailto:64D0546C5EBBD147B75DE133D798665F0855C34D@xxxxxxxxxxxxxxxxx>
Content-Type: text/plain; charset="us-ascii"
Do you expect to get an objective answer to that from a Red Hat list?
Most users on this forum are familiar with GFS2; some may have tried
OCFS2, but there's bound to be a bias.
GFS has been extremely stable for us (we haven't migrated to GFS2 yet; we
went into production with GFS in 2008). Just last night, in fact, a single
hardware node failed in one of our virtual test clusters; the fencing
operations were successful and everything recovered nicely. The cluster
never lost quorum, and disruption was minimal.
Performance is highly variable depending on the software application.
We developed our own application, which gave us the freedom to tailor it
for GFS, improving performance and throughput significantly.
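One illustration of the kind of tailoring that helps (a generic sketch, not our exact design): keep each node's writes under its own directory, so DLM locks stay mastered on the node that uses them instead of bouncing around the cluster:

# Hypothetical layout: one working subtree per node on the shared GFS volume.
mkdir -p /gfs/spool/$(hostname -s)
# Each node's application then writes only beneath its own subtree
# (e.g. /gfs/spool/node1/), avoiding cross-node lock contention.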
Regardless of what you hear, why not give both a try? Your evaluation
and feedback would be very useful to the cluster community.
-Jeff
From: linux-cluster-bounces@xxxxxxxxxx
[mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of yue
Sent: Wednesday, March 09, 2011 9:14 AM
To: linux-cluster
Subject: which is better gfs2 and ocfs2?
Which is better, GFS2 or OCFS2?
I want to share an FC SAN; do you know which is better for stability and performance?
Thanks
------------------------------
Message: 4
Date: Wed, 09 Mar 2011 15:53:40 +0100
From: Michael Lackner<michael.lackner@xxxxxxxxxxxxxxx> <mailto:michael.lackner@xxxxxxxxxxxxxxx>
To: linux clustering<linux-cluster@xxxxxxxxxx> <mailto:linux-cluster@xxxxxxxxxx>
Subject: Re: which is better gfs2 and ocfs2?
Message-ID:<4D779474.6020509@xxxxxxxxxxxxxxx> <mailto:4D779474.6020509@xxxxxxxxxxxxxxx>
Content-Type: text/plain; charset=UTF-8; format=flowed
I guess not all usage scenarios are comparable, but I once tried
GFS2 as well as OCFS2 to share an FC SAN across three nodes, using
8 Gbit FC and 1 Gbit Ethernet for the cluster communication.
Additionally, I compared them to a trial version of Dataplow's SAN
File System (SFS). I was also supposed to compare them to Quantum
StorNext, but there just wasn't enough time for that.
The OS was CentOS 5.3 at the time.
I tried a lot of performance-tuning settings for all three, and the
results were as follows:
1.) SFS was the fastest, but caused reproducible kernel panics.
Those were fixed by Dataplow, but then SFS produced corrupted data
when writing large files. Unusable in that state, so we gave up.
SFS uses NFS for lock management. Noteworthy: writing data on the
machine running the NFS lock manager also crippled I/O performance
for all the other nodes in a very, very bad way.
2.) GFS2 was the slowest, and despite all the tuning I tried, it
never came close to the speed any local FS would provide (compared
to ext3 and XFS). The statfs() calls pretty much crippled the FS
(see the mount-option sketch after this list). Multiple I/O streams
on multiple nodes were not a good idea, it seems. Sometimes you have
to wait minutes for the FS to give you any feedback when you hammer
it with, say, 30 sequential write streams spread equally across the
3 nodes.
3.) OCFS2 was slightly faster than GFS2, especially when it came
to statfs() calls such as ls -l; it did not slow down as much. But
overall, it was still far too slow.
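The mount-option sketch mentioned under 2.) above: newer GFS2 releases have options that trade statfs() accuracy for speed, alongside the usual atime relief. A hedged example; the device and mount point are placeholders, and statfs_quantum may not exist in CentOS 5.3's GFS2:

# noatime/nodiratime avoid lock traffic for access-time updates;
# a larger statfs_quantum yields faster but less exact statfs() results.
mount -t gfs2 -o noatime,nodiratime,statfs_quantum=120 /dev/mapper/san-vol /mnt/gfs2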
Our solution: hook up the SAN on one node only and share it via NFS
over GBit Ethernet. Overall, we are getting better results even with
the obvious network overhead, especially when doing a lot of I/O on
multiple clients.
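A minimal sketch of that setup, with hypothetical paths and subnet:

# /etc/exports on the node that owns the SAN volume:
/export/san 192.168.1.0/24(rw,sync,no_root_squash)
# After editing, re-export with: exportfs -ra
# On each client:
mount -t nfs -o rsize=32768,wsize=32768 server:/export/san /mnt/san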
Our original goal was to provide a high-speed centralized storage
solution for multiple nodes without having to go through Ethernet.
Unfortunately, that failed completely.
Hope this helps; it's just my experience, though. As usual, your
mileage may vary...
yue wrote:
Which is better, GFS2 or OCFS2?
I want to share an FC SAN; do you know which is better for stability and performance?