On Wed, 2011-06-22 at 11:58 -0400, Victor Ramirez wrote:
> You dont need qpid

Good, that makes life easier.
I forgot to say it before, but there are no plans to move the guests
around, so guest1 will always be on host1, and I think with that config
I don't need qpid.

> First of all, try setting SELinux to permissive on your guests or else
> the fenced process will not be allowed to send a multicast packet

It's permissive for other reasons (but I would like to enable it).

> Second of all, remember to set the fence_xvm.key files as such:
> host1 key = guest2 key
> host2 key = guest1 key

I have the same key on all 4 nodes (for now at least), and I did fix an
error I see in all the howtos:

  dd if=/dev/random bs=4096 count=1 of=/etc/cluster/fence_xvm.key

is wrong, because it finishes far too quickly and I ended up with a
20-200 byte file once /dev/random ran out of entropy.

  dd if=/dev/random bs=1 count=4096 of=/etc/cluster/fence_xvm.key

works and gives a 4096-byte file, but it takes forever, especially on a
remote server (in which case I would generate the file locally and scp
it to the remote host).

  dd if=/dev/urandom bs=1 count=4096 of=/etc/cluster/fence_xvm.key

goes fast and is good enough for me. (A rough end-to-end sketch of how I
generate and distribute the key is at the bottom of this mail.)

> so that guest1 send a multicast signal to host2 to fence guest2.

Right.

I did find a config error, so now I can kill guest2 from guest1 (I had
dmzip instead of privip in the config file), but there is still a
problem: I only see one host's guests, not both, which means I can only
fence one way, not both ways.

/ps

> 2011/6/22 Peter Sjoberg <henahadu@xxxxxxxxx>
>         I have two KVM hosts with some clustered guests that I'm
>         trying to set up fencing for using fence_virtd, and I wonder
>         if this is even supposed to work: that a guest on one host
>         tells the other host to kill its guest.
>         I wonder if I need to add some qpid stuff for the two hosts
>         to work together.
>
>         Setup:
>         I have two KVM hosts, let's call them host1 & host2.
>         Each host has a guest (guest1 on host1 & guest2 on host2), and
>         these guests will be clustered with each other.
>         The hosts' normal network is internal only and originates on
>         host eth0/br0.
>         The guests have a separate DMZ network segment, bridged on
>         host eth1/br1; the host has no IP on br1.
>         The guests also have a private link between each other, which
>         originates on host eth2/br10 (crossover cable between the two
>         hosts).
>
>         To bypass multicast routing problems, on the host side I have
>         added an IP to the private link and am running
>         /usr/sbin/fence_virtd set to listen on br10.
>
>         The intent is that guest1, running on host1, should be able to
>         fence by telling host2 to kill guest2, but this doesn't work.
>         On the guest side I test this with "fence_xvm -o list" and I
>         get a list of all guests on one of the hosts; I expected a
>         combined list.
>         Which host's list I get varies; mostly I get the same as the
>         host I'm running on, or the first fence_virtd started.
>         I think the multicast part works, because when I start
>         fence_virtd on one host (host1 or host2) I can issue
>         "fence_xvm -o list" on all 4 nodes and get a list of guests
>         from the host I started it on.
>
>         One other thing that fails is the killing part.
>         I start fence_virtd on host2 and then on guest1 I issue
>         fence_xvm -H <UUID of guest2> -o restart
>         and it just returns "permission denied".
>
>         So, first of all, is it supposed to work and I just messed up
>         my config, or do I need to figure out how to add qpid (or
>         something else) to my setup?
>
>         --
>         -------------------------------------------------------------------
>         Techwiz, Peter Sjoberg  PGP key (12F506C8) on keyserver & homepage
>         Key fingerprint = 3DC2 CEBA 1590 B41A 3780 955A DB42 02BB 12F5 06C8
>         mailto:peters-redhat AT techwiz.ca    http://www.techwiz.ca/~peters
>
> --
> Linux-cluster mailing list
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster
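P.S. For the archives, here is roughly how I generate and distribute the
key now. Treat it as a sketch of my own procedure, not gospel: the loop
assumes root ssh access to the other nodes and that /etc/cluster already
exists on each of them, and the restart line assumes the stock init
script name on my hosts (adjust if your init system differs).

  # on host1: generate one 4096-byte key from /dev/urandom
  dd if=/dev/urandom bs=1 count=4096 of=/etc/cluster/fence_xvm.key
  chmod 600 /etc/cluster/fence_xvm.key

  # copy the same key to the other host and to both guests
  for node in host2 guest1 guest2; do
      scp -p /etc/cluster/fence_xvm.key root@$node:/etc/cluster/
  done

  # restart fence_virtd on the hosts so it picks up the key
  service fence_virtd restart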
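P.P.S. Since the "listen on br10" part of my original question comes up
a lot, the relevant pieces of my /etc/fence_virt.conf on the hosts look
roughly like this. The multicast address and port are the documented
defaults; treat this as a sketch rather than a verified working config.

  fence_virtd {
          listener = "multicast";
          backend = "libvirt";
  }

  listeners {
          multicast {
                  interface = "br10";
                  key_file = "/etc/cluster/fence_xvm.key";
                  address = "225.0.0.12";
                  port = "1229";
                  family = "ipv4";
          }
  }

  backends {
          libvirt {
                  uri = "qemu:///system";
          }
  }

The interface line is what ties fence_virtd to the private link, and
key_file has to point at the same key the guests use with fence_xvm.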
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster