I have a problem when I'm trying to mount the partition. Where is my mistake?

[root@vserver1 ~]# clustat
Member Status: Quorate

 Member Name                        ID   Status
 ------ ----                        ---- ------
 vserver1.teste.br                  1    Online, Local, rgmanager
 vserver2.teste.br                  2    Offline

 Service Name            Owner (Last)            State
 ------- ----            ----- ------            -----
 service:teste           (none)                  stopped

[root@vserver1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0" ?>
<cluster alias="cluster1" config_version="19" name="cluster1">
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="vserver1.teste.br" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="manual" nodename="vserver1.teste.br"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="vserver2.teste.br" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="manual" nodename="vserver2.teste.br"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices>
                <fencedevice agent="fence_manual" name="manual"/>
        </fencedevices>
        <rm>
                <failoverdomains>
                        <failoverdomain name="gfsaluno" ordered="1" restricted="0">
                                <failoverdomainnode name="vserver1.teste.br" priority="1"/>
                                <failoverdomainnode name="vserver2.teste.br" priority="2"/>
                        </failoverdomain>
                </failoverdomains>
                <resources>
                        <clusterfs device="/dev/VGALUNO/LVAluno" force_unmount="0" fsid="63078" fstype="gfs" mountpoint="/storage/aluno" name="gfsaluno" options=""/>
                </resources>
                <service autostart="1" domain="gfsaluno" name="teste">
                        <clusterfs ref="gfsaluno"/>
                </service>
        </rm>
</cluster>

[root@vserver1 ~]# mount -t gfs /dev/VGALUNO/LVAluno /storage/aluno/
/sbin/mount.gfs: node not a member of the default fence domain
/sbin/mount.gfs: error mounting lockproto lock_dlm

[root@vserver1 ~]# lsmod | grep dlm
lock_dlm    56009  1 gfs2
gfs2       522861  1 lock_dlm
dlm        153185 13 lock_dlm
configfs    62301  2 dlm

[root@vserver1 ~]# gfs_edit /dev/VGALUNO/LVAluno
Block #10 of 31FF000 (superblock) (1 of 4)
00010000 01161970 00000001 00000000 00000000 [...p............]
00010010 00000064 00000000 0000051D 00000579 [...d...........y]
00010020 00000000 00001000 0000000C 00000010 [................]
00010030 00000000 00000032 00000000 00000032 [.......2.......2]
00010040 00000000 00000033 00000000 00000033 [.......3.......3]
00010050 00000000 00000036 00000000 00000036 [.......6.......6]
00010060 6C6F636B 5F646C6D 00000000 00000000 [lock_dlm........]
00010070 00000000 00000000 00000000 00000000 [................]
00010080 00000000 00000000 00000000 00000000 [................]
00010090 00000000 00000000 00000000 00000000 [................]
000100A0 636C7573 74657231 3A676673 616C756E [cluster1:gfsalun]
000100B0 6F000000 00000000 00000000 00000000 [o...............]

------------------------------
Marcos Ferreira da Silva
Digital Tecnologia
Uberlandia-MG
(34) 9154-0150 / 3226-2534

-------- Original Message --------
> From: jr <johannes.russek@xxxxxxxxxxxxxxxxx>
> Sent: Wednesday, 6 February 2008 9:07
> To: marcos@xxxxxxxxxxxxxxxxxxxxxx, linux clustering <linux-cluster@xxxxxxxxxx>
> Subject: SPAM-LOW: Re: XEN VM Cluster
>
> > How can I configure a partition to share the VM configs between two machines?
> > Could you send me your cluster.conf so I can compare it with what I want to do?
>
> no need for my cluster.conf. just use a GFS partition and it will be
> fine. (don't forget to put it into fstab)
>
> > Then I need a shared partition to put the VM configs on, which will be
> > accessed by the other machines, and a physical device (an LV on a storage
> > array) to put the real machine on.
> > Is that correct?
>
> i don't know what you mean by "real machine", but your guests not only
> need the config, they will also need some storage for their system.
> that's where you need a storage that's connected to your nodes, whether
> it's LUNs, LVM LVs or image files, no matter. just keep in mind that if
> you are using image files, you need to place them on GFS so that every
> node in your cluster can access them the same way.
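For reference, a GFS entry in /etc/fstab along the lines jr suggests might look like the sketch below, reusing the device and mount point from the cluster.conf above (exact options depend on your setup):

```
# /etc/fstab -- mount the shared GFS volume at boot
/dev/VGALUNO/LVAluno  /storage/aluno  gfs  defaults  0 0
```

On RHEL-style clusters the gfs init script mounts fstab entries of type gfs after cman is up, so the cluster filesystem is not mounted before the cluster infrastructure is running.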
> > When I start a VM on node 1, it will start on a physical device.
> > If I disconnect node 1, will the VM migrate to node 2?
> > Will the client connections be lost?
>
> it's just failover, which means that if the cluster sees a problem with
> one of the nodes, the other node will take over its services, which
> basically means that the vms will be started on the other node.
> that does mean that your clients will get disconnected.
>
> > I'm using an HP storage array and two servers with multipath over
> > Emulex fibre channel.
>
> should be fine.
>
> johannes

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
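As for the mount error at the top of the thread, "node not a member of the default fence domain" usually means that fenced is not running on the node, or that the node never joined the fence domain, so mount.gfs refuses to mount. A rough check/recovery sequence might look like the transcript below (a sketch only; the tool names are from the RHEL5 cluster suite and may differ on other versions):

```
[root@vserver1 ~]# service cman status    # cman, and with it fenced, must be running
[root@vserver1 ~]# group_tool             # look for a "fence" group containing this node
[root@vserver1 ~]# fence_tool join        # join the default fence domain if missing
[root@vserver1 ~]# mount -t gfs /dev/VGALUNO/LVAluno /storage/aluno
```

Note also that with fence_manual configured and vserver2 offline, fenced may be blocked waiting for a manual fence acknowledgement (fence_ack_manual) before it lets the mount proceed.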