Hello Digimer,

Thank you for your advice.

* GNBD
  I have already succeeded in mounting GNBD. I currently have
  locking_type = 1. Should I change it to locking_type = 3? If not,
  what problems will occur? (My current understanding of the setting
  is sketched after this list.)

* fence_apc
  For various reasons, I cannot use an APC switch. (That configuration
  example is a test environment.) That is why I asked about an
  alternative solution.

* fence_wol
  I cannot find a fence_wake_on_lan agent, so I am thinking of creating
  one. WOL supports power-on and, I believe, power-off (I will test
  that later), so it should be usable as a fence tool. I downloaded
  fence_na, which is written in Perl, and I want to adapt it to use the
  wol command. Could you point me to a good reference for building
  fence_wol? (Of course, fence_na itself is a good reference!) A rough
  sketch of what I have in mind is below.

Thank you for your advice again.

Regards.
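For the GNBD locking question, this is my current reading of the
locking_type setting in /etc/lvm/lvm.conf (an assumption from the docs,
please correct me if I am wrong): type 1 is local, file-based locking
for a single node, while type 3 uses the built-in clustered locking via
clvmd, which nodes sharing the same volume group over GNBD normally
need so that they all see consistent LVM metadata.

  # /etc/lvm/lvm.conf (excerpt) -- my understanding, not yet tested:
  # locking_type = 1  -> local, file-based locking (single node only)
  # locking_type = 3  -> built-in clustered locking via clvmd, for
  #                      volume groups shared between cluster nodes
  locking_type = 3

And here is a rough sketch of the fence_wol agent I have in mind,
modeled on how fence_na reads its options: fenced passes one
"name = value" pair per line on stdin. The parameter names (mac,
action) and the wakeonlan command are only my assumptions, not a
finished agent:

  #!/usr/bin/perl -w
  # fence_wol - sketch of a Wake-on-LAN fence agent (untested).
  # fenced passes options on stdin, one "name = value" pair per line.
  use strict;

  my %opt;
  while (<STDIN>) {
      chomp;
      next if /^\s*#/ || /^\s*$/;        # skip comments and blank lines
      my ($name, $value) = split(/\s*=\s*/, $_, 2);
      $opt{$name} = $value if defined $value;
  }

  # "action" and "mac" are assumed parameter names for this sketch.
  my $action = lc($opt{'action'} || $opt{'option'} || 'on');
  my $mac    = $opt{'mac'} or die "fence_wol: no MAC address given\n";

  if ($action eq 'on') {
      # Send the magic packet; assumes the wakeonlan(1) tool is present.
      exit(system('wakeonlan', $mac) == 0 ? 0 : 1);
  }

  # WOL has no standard power-off; whether these desktops accept an
  # off packet is exactly what I still have to test, so fail for now.
  exit 1;

To test it by hand I would do something like this (the MAC address is
illustrative):

  echo -e "mac = 00:11:22:33:44:55\naction = on" | ./fence_wol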
2011/5/30 Digimer <linux@xxxxxxxxxxx>:
> On 05/29/2011 06:14 AM, Hiroyuki Sato wrote:
>>
>> Hello Digimer.
>>
>> Thank you for your information.
>>
>> This is the document that I was looking for!
>> This doc is very, very useful. Thanks!!
>
> Wonderful, I'm glad you find it useful. :)
>
>> I want to ask one thing.
>>
>> Please take a look at my cluster configuration again.
>
> Will do, comments will be in-line.
>
>> Mainly I want to use GNBD on gfs_clientX.
>> The GNBD servers are gfs2 and gfs3.
>>
>> The gfs_client hardware does not support IPMI, iLO and the like,
>> because those machines are desktop computers.
>>
>> And there is no APC-like UPS.
>>
>> The desktop machines only support Wake On LAN.
>>
>> What fence device should I use?
>> I'm thinking fence_wake_on_lan would be the proper fence device,
>> but no such agent exists.
>
> The least expensive option for a commercial product would be APC's
> switched PDU. You have 13 machines, so you would need either 2 of the
> 1U models, or 1 of the 0U models.
>
> If you are in North America, you can use these:
>
> http://www.apc.com/products/resource/include/techspec_index.cfm?base_sku=AP7900
>
> or
>
> http://www.apc.com/products/resource/include/techspec_index.cfm?base_sku=AP7931
>
> If you are in Japan, you'll need to select the best one of these:
>
> http://www.apc.com/products/family/index.cfm?id=70&ISOCountryCode=JP
>
> Whichever you get, you can use the 'fence_apc' fence agent.
>
>> Thank you for your advice.
>>
>> <?xml version="1.0"?>
>> <cluster name="arch_gfs1" config_version="21">
>>   <clusternodes>
>>     <clusternode name="gfs1.archsystem.com" votes="1" nodeid="5">
>>       <fence>
>>         <method name="single">
>>           <device name="manual" nodename="gfs1.archsystem.com"/>
>>         </method>
>>       </fence>
>>     </clusternode>
>>     <clusternode name="gfs2.archsystem.com" votes="1" nodeid="6">
>>       <fence>
>>         <method name="single">
>>           <device name="manual" nodename="gfs2.archsystem.com"/>
>>         </method>
>>       </fence>
>>     </clusternode>
>>     <clusternode name="gfs3.archsystem.com" votes="1" nodeid="7">
>>       <fence>
>>         <method name="single">
>>           <device name="manual" nodename="gfs3.archsystem.com"/>
>>         </method>
>>       </fence>
>>     </clusternode>
>>     <clusternode name="gfs_client1.archsystem.com" votes="1" nodeid="21">
>>       <fence>
>>         <method name="single">
>>           <device name="manual" nodename="gfs_client1.archsystem.com"/>
>>         </method>
>>       </fence>
>>     </clusternode>
>>     <clusternode name="gfs_client2.archsystem.com" votes="1" nodeid="22">
>>       <fence>
>>         <method name="single">
>>           <device name="manual" nodename="gfs_client2.archsystem.com"/>
>>         </method>
>>       </fence>
>>     </clusternode>
>>     <clusternode name="gfs_client3.archsystem.com" votes="1" nodeid="23">
>>       <fence>
>>         <method name="single">
>>           <device name="manual" nodename="gfs_client3.archsystem.com"/>
>>         </method>
>>       </fence>
>>     </clusternode>
>>     <clusternode name="gfs_client4.archsystem.com" votes="1" nodeid="24">
>>       <fence>
>>         <method name="single">
>>           <device name="manual" nodename="gfs_client4.archsystem.com"/>
>>         </method>
>>       </fence>
>>     </clusternode>
>>     <clusternode name="gfs_client5.archsystem.com" votes="1" nodeid="25">
>>       <fence>
>>         <method name="single">
>>           <device name="manual" nodename="gfs_client5.archsystem.com"/>
>>         </method>
>>       </fence>
>>     </clusternode>
>>     <clusternode name="gfs_client6.archsystem.com" votes="1" nodeid="26">
>>       <fence>
>>         <method name="single">
>>           <device name="manual" nodename="gfs_client6.archsystem.com"/>
>>         </method>
>>       </fence>
>>     </clusternode>
>>     <clusternode name="gfs_client7.archsystem.com" votes="1" nodeid="27">
>>       <fence>
>>         <method name="single">
>>           <device name="manual" nodename="gfs_client7.archsystem.com"/>
>>         </method>
>>       </fence>
>>     </clusternode>
>>     <clusternode name="gfs_client8.archsystem.com" votes="1" nodeid="28">
>>       <fence>
>>         <method name="single">
>>           <device name="manual" nodename="gfs_client8.archsystem.com"/>
>>         </method>
>>       </fence>
>>     </clusternode>
>>     <clusternode name="gfs_client9.archsystem.com" votes="1" nodeid="29">
>>       <fence>
>>         <method name="single">
>>           <device name="manual" nodename="gfs_client9.archsystem.com"/>
>>         </method>
>>       </fence>
>>     </clusternode>
>>     <clusternode name="gfs_client10.archsystem.com" votes="1" nodeid="30">
>>       <fence>
>>         <method name="single">
>>           <device name="manual" nodename="gfs_client10.archsystem.com"/>
>>         </method>
>>       </fence>
>>     </clusternode>
>>   </clusternodes>
>>   <fencedevices>
>>     <fencedevice name="manual" agent="fence_manual"/>
>>   </fencedevices>
>>   <rm>
>>     <failoverdomains/>
>>     <resources/>
>>   </rm>
>> </cluster>
>>
>> Regards.
>
> Outside of the "fence_manual" issue, this looks fine. You will
> probably want to get the GFS and GNBD stuff into rgmanager, but that
> can come later, after you have fencing working and the core of the
> cluster tested and working.
>
> Take a look at this:
>
> http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Global_Network_Block_Device/s1-gnbd-mp-sn.html
>
> It discusses fencing with GNBD.
> Below is the start of the Red Hat document on GNBD in EL5 that you
> may find helpful, if you haven't read it already.
>
> http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Global_Network_Block_Device/ch-gnbd.html
>
> Let me know if you want/need any more help. I'll be happy to see what
> I can do.
>
> --
> Digimer
> E-Mail: digimer@xxxxxxxxxxx
> Freenode handle: digimer
> Papers and Projects: http://alteeve.com
> Node Assassin: http://nodeassassin.org
> "I feel confined, only free to expand myself within boundaries."

--
Hiroyuki Sato

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster