Restart LVM without dismounting LUN/FS

Restart LVM without dismounting the LUN/FS. Here is what you have to do:

# killall clvmd
# /usr/sbin/clvmd
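
If your version of clvmd supports it, a gentler option may be to refresh its device cache instead of killing the daemon (a hedged alternative; check clvmd(8) on your release for the -R flag):

# clvmd -R    # ask all running clvmd daemons in the cluster to reload their device cache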

-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx
[mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of
linux-cluster-request@xxxxxxxxxx
Sent: Thursday, January 15, 2009 12:00 PM
To: linux-cluster@xxxxxxxxxx
Subject: Linux-cluster Digest, Vol 57, Issue 14

Send Linux-cluster mailing list submissions to
	linux-cluster@xxxxxxxxxx

To subscribe or unsubscribe via the World Wide Web, visit
	https://www.redhat.com/mailman/listinfo/linux-cluster
or, via email, send a message with subject or body 'help' to
	linux-cluster-request@xxxxxxxxxx

You can reach the person managing the list at
	linux-cluster-owner@xxxxxxxxxx

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Linux-cluster digest..."


Today's Topics:

   1. Create Logical Volume  (Cluster Management)
   2. Re: Re: Fencing test (Paras pradhan)
   3. Documentation enhancement requests/patches? (denis)
   4. Re: List Cluster Resources (denis)
   5. "Simple" Managed NFS setup (denis)
   6. cman-2.0.98-1.el5 / question about a problem when	launching
      cman (Alain.Moulle)
   7. Re: Documentation enhancement requests/patches? (Bob Peterson)
   8. Re: cman-2.0.98-1.el5 / question about a problem	when
      launching cman (Chrissie Caulfield)
   9. Re: [Openais] cman in RHEL 5 cluster suite and	Openais
      (Chrissie Caulfield)
  10. GFS/clvmd question (Gary Romo)


----------------------------------------------------------------------

Message: 1
Date: Wed, 14 Jan 2009 19:29:23 +0100
From: "Cluster Management" <cluster@xxxxxxxx>
Subject:  Create Logical Volume 
To: <linux-cluster@xxxxxxxxxx>
Message-ID: <008201c97676$05c36600$114a3200$@it>
Content-Type: text/plain; charset="us-ascii"

Hi all,

 

I have a two-node RHEL 5 cluster and an external iSCSI storage array. I use
Xen for virtualization purposes. When I create a new LUN on the storage, I
use the hot_add command to discover it from the nodes.

The problem is that I have to restart clvmd to be able to create a new
Logical Volume. This operation is very critical because I have to stop or
migrate each VM running on the node and I have to umount their LUNs.

Is there a way to update clvmd without restarting it?
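
(For context, a hedged sketch of the usual clustered-LVM steps once every node can see the new LUN; the device, VG and LV names below are made up for illustration:

# pvcreate /dev/mapper/new_lun            # initialise the new LUN as a physical volume
# vgextend vg_xen /dev/mapper/new_lun     # add it to an existing clustered volume group
# lvcreate -L 50G -n vm_disk01 vg_xen     # this is the step that fails until clvmd sees the device
)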

 

Thanks a lot,

--

Francesco Gallo

XiNet S.r.L.

gallo (at) xinet (dot) it

 

 


------------------------------

Message: 2
Date: Wed, 14 Jan 2009 13:48:58 -0600
From: "Paras pradhan" <pradhanparas@xxxxxxxxx>
Subject: Re:  Re: Fencing test
To: "linux clustering" <linux-cluster@xxxxxxxxxx>
Message-ID:
	<8b711df40901141148k740cf738ha9e43f0222b0a4ce@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1

On Thu, Jan 8, 2009 at 10:57 PM, Rajagopal Swaminathan
<raju.rajsand@xxxxxxxxx> wrote:
> Greetings,
>
> On Fri, Jan 9, 2009 at 12:09 AM, Paras pradhan
<pradhanparas@xxxxxxxxx> wrote:
>>
>>
>> In an act to solve my fencing issue in my 2 node cluster, i tried to
>> run fence_ipmi to check if fencing is working or not. I need to know
>> what is my problem
>>
>> -
>> [root@ha1lx ~]# fence_ipmilan -a 10.42.21.28 -o off -l admin -p admin
>> Powering off machine @ IPMI:10.42.21.28...ipmilan: Failed to connect
>> after 30 seconds
>> Failed
>> [root@ha1lx ~]#
>> ---------------
>>
>>
>> Here 10.42.21.28 is an IP address assigned to IPMI interface and I am
>> running this command in the same host.
>>
>
> Sorry couldn't respond earlier as I do this on personal time (which as
> usual is limited for us IT guys and gals ;-) ) and not during work per
> se..
>
> Do not run fence script from the node that you want to fence.
>
> Let us say you want to fence node 3.
> 1. Try pinging the node 3's IPMI from node 4. It should be successful.
> 2. Issue the fence command from Node 4 with IP of Node 3 IPMI as
argument .
>
>
> HTH
>
> With warm regards
>
> Rajagopal
>
> --
> Linux-cluster mailing list
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster
>

Yes, as you said, I am able to power down node4 using node3, so it
seems IPMI is working fine. But I don't know what is going on with my
two-node cluster. Can a Red Hat cluster operate fine in two-node mode?
Do I need qdisk, or is it optional? Which areas do I need to focus on
to run my two-node Red Hat cluster with IPMI as the fencing device?

Thanks
Paras.
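
(A minimal sketch for the two-node question, assuming the address and credentials from the test above: with the two_node settings in cluster.conf a quorum disk is generally optional, and the fence agent should be run from the surviving peer, not from the node being fenced.

<cman two_node="1" expected_votes="1"/>

# run from the peer node; -o status is a non-destructive check if your
# fence_ipmilan build supports it, otherwise test carefully with -o reboot
fence_ipmilan -a 10.42.21.28 -l admin -p admin -o status
)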



------------------------------

Message: 3
Date: Thu, 15 Jan 2009 10:49:11 +0100
From: denis <denisb+gmane@xxxxxxxxx>
Subject:  Documentation enhancement requests/patches?
To: linux-cluster@xxxxxxxxxx
Message-ID: <gkn0qn$n57$1@xxxxxxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1

Hi,

After getting to know the "Configuring and Managing a Red Hat Cluster"
documentation [1] fairly well, I have a few enhancement suggestions.
What is the best way to submit these?

[1] http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Administration/

Regards
--
Denis Braekhus



------------------------------

Message: 4
Date: Thu, 15 Jan 2009 10:52:00 +0100
From: denis <denisb+gmane@xxxxxxxxx>
Subject:  Re: List Cluster Resources
To: linux-cluster@xxxxxxxxxx
Message-ID: <gkn100$n57$2@xxxxxxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1

Chaitanya Kulkarni wrote:
> Hi All,
> 
> I am new to RHEL Clusters. Is there any way (other than the
> cluster.conf file) to view/list all the Cluster Resources that are
> used under a Cluster Service (Resource Group)? Some command which
> might give output like -
> 
> Service Name = Service1
> 
> Resources -
> IP Address = <Value>
> File System = <Value>
> Script = <Value>

Hi Chaitanya,

I recently discovered the rg_test tool; it might be of help to you. It
does not currently have a man page, but check the "Configuring and
Managing a Red Hat Cluster" chapter "Debugging and Testing Services and
Resource Ordering" [1] for usage.
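
For example (a hedged sketch; see [1] for the authoritative syntax):

# rg_test test /etc/cluster/cluster.conf     # parse the config and print the resource tree
# clustat                                    # show services and their current state per node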

Hope this is of some help to you.

[1] http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Administration/s1-clust-rsc-testing-config-CA.html

Regards
-- 
Denis Braekhus



------------------------------

Message: 5
Date: Thu, 15 Jan 2009 12:00:03 +0100
From: denis <denisb+gmane@xxxxxxxxx>
Subject:  "Simple" Managed NFS setup
To: linux-cluster@xxxxxxxxxx
Message-ID: <gkn4vj$5jp$1@xxxxxxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1

Hi,

I have begun a setup with a pretty simple 3-node cluster and a couple of
services. One of these is NFS, and I have set up the basics as laid out
in the included cluster.conf below.

A couple of questions :

1. Do I need to keep the nfs-state information on the NFS_homes volume
so as to keep it in sync between cluster nodes?

2. The nfsclient name="nfs" is added to enable the current NFS serving
node to mount its own export; otherwise I got:

Jan 15 11:41:03 node03 mountd[14229]: mount request from unknown host
XX.XX.XX.174 for /mnt/nfshome (/mnt/nfshome)

This is obviously caused by the mount connecting as the NFS service
address instead of the host address. What is the best way to resolve
this? Mounting with the service address is not a good solution, it
seems, as failing the service over is problematic when that address is
in use locally.
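
(For clarity, the self-mount in question is essentially this, run on the node currently holding the service; the local mountpoint is made up for illustration:

# mount -t nfs nfs.domain:/mnt/nfshome /mnt/homes-local
)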

3. I read "The Red Hat Cluster Suite NFS Cookbook" [1], as the reference
Red Hat documentation was a bit thin regarding best practices. Is there
more documentation available to read?


Any tips/pointers/help highly appreciated.


<rm>
        <failoverdomains>
                <failoverdomain name="failover_nfshome" ordered="1" restricted="1">
                        <failoverdomainnode name="node01.domain" priority="30"/>
                        <failoverdomainnode name="node02.domain" priority="30"/>
                        <failoverdomainnode name="node03.domain" priority="10"/>
                </failoverdomain>
        </failoverdomains>
        <resources>
                <ip address="XX.XX.XX.174" monitor_link="1"/>
                <nfsexport name="NFShome"/>
                <fs device="/dev/mapper/NFS_homes" fsid="2" force_fsck="1" force_unmount="1" fstype="ext3" mountpoint="/mnt/nfshome" name="nfs_homes" self_fence="0"/>
                <nfsclient name="node01" options="rw" target="node01.domain"/>
                <nfsclient name="node02" options="rw" target="node02.domain"/>
                <nfsclient name="node03" options="rw" target="node03.domain"/>
                <nfsclient name="nfs" options="rw" target="nfs.domain"/>
        </resources>
        <service autostart="0" domain="failover_nfshome" exclusive="0" name="client_nfshome" recovery="restart">
                <ip ref="XX.XX.XX.174"/>
                <fs ref="nfs_homes">
                        <nfsexport name="nfshome">
                                <nfsclient ref="node01"/>
                                <nfsclient ref="node02"/>
                                <nfsclient ref="node03"/>
                                <nfsclient ref="nfs"/>
                        </nfsexport>
                </fs>
        </service>
</rm>


[1] http://sources.redhat.com/cluster/doc/nfscookbook.pdf

Best Regards
-- 
Denis Braekhus



------------------------------

Message: 6
Date: Thu, 15 Jan 2009 13:48:42 +0100
From: "Alain.Moulle" <Alain.Moulle@xxxxxxxx>
Subject:  cman-2.0.98-1.el5 / question about a problem
	when	launching cman
To: linux-cluster@xxxxxxxxxx
Message-ID: <496F30AA.3000106@xxxxxxxx>
Content-Type: text/plain; charset="iso-8859-1"

Hi,
About this problem, I wonder if this is a definitive behavior considered
as normal, or if it will work differently in a future release of cman or
openais? (In previous versions, with cman-2.0.73, we did not have this
problem.)
Thanks if someone could give an answer...
Regards,
Alain
> Release : cman-2.0.98-1.el5
> (but same problem with 2.0.95)
>
> I face a problem when launching cman on a two-node cluster :
>
> 1. Launching cman on node 1 : OK
> 2. When launching cman on node 2, the log on node1 gives :
>     cman killed by node 2 because we rejoined the cluster without a full restart

------------------------------

Message: 7
Date: Thu, 15 Jan 2009 08:59:56 -0500 (EST)
From: Bob Peterson <rpeterso@xxxxxxxxxx>
Subject: Re:  Documentation enhancement
	requests/patches?
To: linux clustering <linux-cluster@xxxxxxxxxx>
Message-ID:
	<40063497.1429461232027996695.JavaMail.root@xxxxxxxxxxxxxxxxxxxxxxxxxxxx.redhat.com>
Content-Type: text/plain; charset=utf-8

----- "denis" <denisb+gmane@xxxxxxxxx> wrote:
| Hi,
| 
| After getting to know the "Configuring and Managing a Red Hat
| Cluster"
| documentation [1] fairly well, I have a few enhancement suggestions.
| What is the best way to submit these?
| 
| [1]
| http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Administration/
|
| Regards
| --
| Denis Braekhus

Hi Denis,

Probably the best way to do this is to open a new bugzilla record
against product Red Hat Enterprise Linux 5, component
"Documentation--cluster"

If you have permission to look at it, you can follow this example:
https://bugzilla.redhat.com/show_bug.cgi?id=471364

You can assign it to slevine@xxxxxxxxxx or pkennedy@xxxxxxxxxxx

Regards,

Bob Peterson
Red Hat GFS



------------------------------

Message: 8
Date: Thu, 15 Jan 2009 15:01:17 +0000
From: Chrissie Caulfield <ccaulfie@xxxxxxxxxx>
Subject: Re:  cman-2.0.98-1.el5 / question about a
	problem	when	launching cman
To: linux clustering <linux-cluster@xxxxxxxxxx>
Message-ID: <496F4FBD.4040607@xxxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1

Alain.Moulle wrote:
> Hi ,
> About this problem, I wonder if this is a definitive behavior considered
> as normal, or if it will work differently in a future release of cman or
> openais? (In previous versions, with cman-2.0.73, we did not have this
> problem.)
> Thanks if someone could give an answer...
> Regards,
> Alain
>> Release : cman-2.0.98-1.el5
>> (but same problem with 2.0.95)
>>
>> I face a problem when launching cman on a two-node cluster :
>>
>> 1. Launching cman on node 1 : OK
>> 2. When launching cman on node 2, the log on node1 gives :
>>     cman killed by node 2 because we rejoined the cluster without a full restart
> 

Alain,

I'm sure this question has been answered many times on IRC and on the
mailing list, as well as in the FAQ.


Chrissie



------------------------------

Message: 9
Date: Thu, 15 Jan 2009 15:05:33 +0000
From: Chrissie Caulfield <ccaulfie@xxxxxxxxxx>
Subject:  Re: [Openais] cman in RHEL 5 cluster suite
	and	Openais
To: unleashing_vivek007@xxxxxxxxxxx
Cc: linux clustering <linux-cluster@xxxxxxxxxx>
Message-ID: <496F50BD.4020409@xxxxxxxxxx>
Content-Type: text/plain; charset=UTF-8

Vivek Purohit wrote:
> Hi Steve,
> Thanks for the previous reply.
> 
> I was able to run the checkpointing tests from the OpenAIS tarball
> on RHEL 5.
> 
> I explored and came to know that the CMAN service of RHEL 5's
> cluster suite runs as aisexec; thus the tests could be run directly.
> 
> Can you please explain how OpenAIS is used by RHEL 5's
> CMAN service?
> 

Hi,

You might like to read these two documents:

http://people.redhat.com/ccaulfie/docs/aiscman.pdf
http://people.redhat.com/ccaulfie/docs/CSNetworking.pdf


-- 

Chrissie



------------------------------

Message: 10
Date: Thu, 15 Jan 2009 09:08:08 -0700
From: Gary Romo <garromo@xxxxxxxxxx>
Subject:  GFS/clvmd question
To: linux-cluster@xxxxxxxxxx
Message-ID:
	
<OF48AE2273.1EEE35B0-ON8725753F.0057EC20-8725753F.0058A29F@xxxxxxxxxx>
Content-Type: text/plain; charset="us-ascii"


Why can't I mount my GFS logical volume on the second node in the
cluster? I am creating a new GFS file system on an existing cluster.
Here is what I did:

1.  I determined I had space in an existing volume group (both nodes)
2.  I created my logical volume (node 1)
3.  I ran my gfs_mkfs (node 1)
4.  I mounted my new lv on node 1 only
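
(In command form, that is roughly the following; a sketch only, since the journal count, filesystem name and <clustername> placeholder are not in the original message:

# lvcreate -L 25G -n new_lv vggfs                                        # step 2
# gfs_mkfs -p lock_dlm -t <clustername>:new_gfs -j 2 /dev/vggfs/new_lv   # step 3
# mount -t gfs /dev/vggfs/new_lv /gfs/new_mount                          # step 4
)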

Here is the error I get on node 2

# mount /gfs/new_mount
/sbin/mount.gfs: invalid device path "/dev/vggfs/new_lv"

I see that the logical volume is "inactive" on node 2 and "ACTIVE" on
node 1:

inactive          '/dev/vgclgfs/new_lv' [25.00 GB] inherit

ACTIVE            '/dev/vgclgfs/new_lv' [25.00 GB] inherit

What do I need to do in order to make this logical volume active on
node 2? I thought that this would have happened automagically via
clvmd, rather than having to be done manually.
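
(A hedged sketch of the usual fix, run on node 2 with clvmd running there; the VG name is taken from the lvscan output above:

# vgchange -ay vgclgfs               # activate the VG's logical volumes on this node
# or, for just this LV:
# lvchange -ay /dev/vgclgfs/new_lv
)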


Gary Romo

------------------------------

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

End of Linux-cluster Digest, Vol 57, Issue 14
*********************************************

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
