A> How many partitions can be mounted using the cluster server on RHEL 5.4 (GFS2)?
B> And how many VIPs can be used with it?
Regards,
Rajat J Patel
FIRST THEY IGNORE YOU...
THEN THEY LAUGH AT YOU...
THEN THEY FIGHT YOU...
THEN YOU WIN...
On Mon, Mar 22, 2010 at 9:30 PM, <linux-cluster-request@xxxxxxxxxx> wrote:
Today's Topics:
1. Re: RHCS: How to display with command line HA resources
attached to service (Paul Morgan)
2. Re: RHCS: How to display with command line HA resources
attached to service (Moralejo, Alfredo)
3. Re: failure during gfs2_grow caused node crash & data loss
(Bob Peterson)
4. Re: GFS create file performance (Jeff Sturm)
----------------------------------------------------------------------
Message: 1
Date: Sun, 21 Mar 2010 16:39:26 -0400
From: Paul Morgan <jumanjiman@xxxxxxxxx>
To: linux clustering <linux-cluster@xxxxxxxxxx>
Subject: Re: RHCS: How to display with command line HA
resources attached to service
Try cman_tool services
On Mar 21, 2010 11:32 AM, "Hoang, Alain" <Alain.Hoang@xxxxxx> wrote:
Hello,
With the command clustat -l -s <Service> I could get some information, but
I cannot display:
- List of HA resources attached
- Failover Domain with its members
Is there any other command that could give me the information?
Best Regards,
Kien Lam Alain Hoang,
Technical Consultant, Factory consulting and training
HP Software and Solutions
Communications & Media Solutions
NGOSS Practice Delivery Management
------------------------------
Message: 2
Date: Mon, 22 Mar 2010 10:48:38 +0100
From: "Moralejo, Alfredo" <alfredo.moralejo@xxxxxxxxx>
To: linux clustering <linux-cluster@xxxxxxxxxx>
Subject: Re: RHCS: How to display with command line HA
resources attached to service
You can also try the rg_test command; it provides the most information.
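For example, a rough sketch (paths and output depend on your rgmanager version):

    # Print the resource tree from the cluster configuration, showing
    # each service and the HA resources attached to it.
    rg_test test /etc/cluster/cluster.conf

    # Show the resource rules rgmanager knows about (useful for checking
    # which child resources a given service type can carry).
    rg_test rules

Failover domains and their members are not printed by clustat; they can be
read directly from the <failoverdomains> section of /etc/cluster/cluster.conf.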
------------------------------
Message: 3
Date: Mon, 22 Mar 2010 09:52:21 -0400 (EDT)
From: Bob Peterson <rpeterso@xxxxxxxxxx>
To: bergman@xxxxxxxxxxxx, linux clustering <linux-cluster@xxxxxxxxxx>
Subject: Re: failure during gfs2_grow caused node crash & data loss
----- bergman@xxxxxxxxxxxx wrote:
| I just had a serious problem with gfs2_grow which caused a loss of
| data and a cluster node reboot.
|
| I was attempting to grow a gfs2 volume from 50GB => 145GB. The volume
| was mounted on both cluster nodes at the start of running "gfs2_grow".
| When I umounted the volume from _one_ node (not where gfs2_grow was
| running), the machine running gfs2_grow rebooted and the filesystem
| is damaged.
|
| The sequence of commands was as follows. Each command was successful
| until the "umount".
(snip)
| Mark
Hi Mark,
There's a good chance this was caused by bugzilla bug #546683, whose
fix is scheduled for release in 5.5. However, I've also seen problems
like this when a logical volume in LVM isn't marked as clustered.
Make sure it is with the "vgs" command (check that the attribute flags
end with a "c"), and if not, run vgchange -cy <volgrp>.
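For example (a minimal sketch; substitute your own volume group name for "myvg"):

    # Check the clustered flag: the last character of the Attr column
    # should be "c" for a clustered volume group.
    vgs -o vg_name,vg_attr myvg

    # If it is not set, mark the volume group clustered (clvmd should be
    # running on the cluster nodes when you do this).
    vgchange -cy myvg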
As for fsck.gfs2, it should never segfault. IMHO, this is a bug
so please open a bugzilla record: Product: "Red Hat Enterprise Linux 5"
and component "gfs2-utils". Assign it to me.
As for recovering your volume, you can try the steps below (sketched as
shell commands after the list), but it's not guaranteed to work:
(1) Reduce the volume to its size from before the gfs2_grow.
(2) Mount it from one node only, if you can (it may crash).
(3) If it lets you mount it, run gfs2_grow again.
(4) Unmount the volume.
(5) Mount the volume from both nodes.
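A rough sketch of steps 1 through 4 as shell commands, assuming the filesystem
sits on an LVM logical volume /dev/myvg/mylv, was 50GB before the grow, and
mounts at /mnt/gfs2 (adjust names and sizes to your setup; this is exactly as
risky as described above):

    # (1) Shrink the logical volume back to the filesystem's pre-grow size.
    #     lvreduce will warn about possible data loss.
    lvreduce -L 50G /dev/myvg/mylv

    # (2) Mount from one node only (this step may crash the node).
    mount -t gfs2 /dev/myvg/mylv /mnt/gfs2

    # (3) If the mount succeeded, retry the grow.
    gfs2_grow /mnt/gfs2

    # (4) Unmount before letting the other nodes mount it again.
    umount /mnt/gfs2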
If that doesn't work, or if the system can't properly mount the volume,
your choices are either (1) reformat the volume and restore from
backup, or (2) use gfs2_edit to patch the i_size field of the rindex
file to a fairly small multiple of 96, then repeat steps 1 through 4.
Regards,
Bob Peterson
Red Hat File Systems
------------------------------
Message: 4
Date: Mon, 22 Mar 2010 10:08:17 -0400
From: Jeff Sturm <jeff.sturm@xxxxxxxxxx>
To: "linux clustering" <linux-cluster@xxxxxxxxxx>
Subject: Re: GFS create file performance
> -----Original Message-----
> From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx]
> On Behalf Of C. Handel
> Sent: Friday, March 19, 2010 5:43 PM
> To: linux-cluster@xxxxxxxxxx
> Subject: Re: GFS create file performance
>
> Is your session data valuable? What happens if you lose it? For web
> applications this normally means that users need to log in again.
It varies. Our "session" mechanism is used for a variety of purposes,
some very short lived, others that may persist for weeks.
In some cases the loss of this data will force the user to login again,
as you say. In other examples a link that we send in an email may
become invalid.
We may eventually decide to adopt different storage backends for
short-lived session data, or for transient vs. persistent data.
> How big is your data? What is the read/write ratio?
We have a 50GB GFS filesystem right now. Reads/writes are close to 1:1.
> You could go for a memcache. Try two dedicated machines with lots of
> memory. Write your session storage to always write to both and read
> from one. Handle failure in software. Unbeatable performance; it will
> saturate gigabit links with ease.
Yup, we're aware of this and other storage alternatives. I wanted to
ask about it on the linux-cluster list to make sure we didn't overlook
anything regarding GFS. I'm also curious to know what the present
limitations of GFS are.
We actually use GFS for several purposes. One of those is to
synchronize web content--we used to run an elaborate system of rsync
processes to keep all content distributed over all nodes. We've
replaced the use of rsync with a GFS filesystem (two master nodes, many
spectator nodes). This is working well.
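For reference, a read-only spectator mount on the content-serving nodes might
look roughly like this (device and mount point are illustrative; the spectator
option applies to GFS/GFS2 mounts in general):

    # Spectator mount: the node mounts read-only and does not acquire a
    # journal, so it never writes to the filesystem.
    mount -t gfs -o spectator /dev/vg_web/lv_content /var/www/content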
We also use GFS to distribute certain user-contributed content, such as
images or video. This is a read-write filesystem mounted on all cluster
nodes. GFS works well for this too.
Our only controversial use of GFS at the moment is the session data due
to the frequency of create/write/read/unlink that we need to support.
Following Steven Whitehouse's great explanation last week of inode
creation, resource groups and extended attributes, we tried disabling
SELinux on certain cluster nodes. Surprisingly, I've seen block I/O
drop by as much as 30-40% as a result.
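For anyone wanting to check the same thing on their own nodes, a rough sketch
(the file path is illustrative, and this is not necessarily the exact procedure
used here):

    # Each SELinux label is stored as an extended attribute that the
    # filesystem has to read and write alongside the inode.
    getfattr -n security.selinux /gfs/sessions/somefile

    # Check the current mode. Note that "setenforce 0" only switches to
    # permissive mode (new files still get labeled); to stop labeling
    # entirely, set SELINUX=disabled in /etc/selinux/config and reboot.
    getenforce
    setenforce 0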
-Jeff
------------------------------
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
End of Linux-cluster Digest, Vol 71, Issue 37
*********************************************