Re: vm.sh with and without virsh


 



2009/10/5 Edson Marquezani Filho <edsonmarquezani@xxxxxxxxx>:
> On Mon, Oct 5, 2009 at 11:43, brem belguebli <brem.belguebli@xxxxxxxxx> wrote:
>> Hi,
>>
>> To give an example of setup that did "surprisingly" work like a charm
>> out of the box  (RHEL 5.4 KVM)
>>
>> -  3 nodes cluster (RHEL 5.4 x86_64)
>> -  2 x 50 GB SAN LUNs (partitioned: p1 = 100 MB, p2 = 49.9 GB)
>>    /dev/mpath/mpath4 (mpath4p1, mpath4p2)
>>    /dev/mpath/mpath5 (mpath5p1, mpath5p2)
>> -  3 mirrored LV's  lvolVM1, lvolVM2 and lvolVM3 on mpath4p2/mpath5p2
>> and mpath4p1 as mirrorlog
>
> I don't know about this mirroring feature. How does it work and why do
> you use it ?
>
I'm trying to build a setup across 2 sites, which doesn't add anything
important to the current topic; it's just my setup ;-)

>> - cmirror to maintain mirror log across the cluster
>> LVs are activated "shared", i.e. active on all nodes; no exclusive
>> activation is used.
>>
>> Each VM using a LV as virtual disk device (VM XML conf file):
>>
>>  <disk type='block' device='disk'>
>>      <source dev='/dev/VMVG/lvolVM1'/> <-- for VM1
>>
>> Each VM being defined in the cluster.conf with no hierarchical
>> dependency on anything:
>>
>> <rm>
>>               <vm autostart="0" name="testVM1" recovery="restart"
>> use_virsh="1"/>
>>               <vm autostart="0" name="testVM2" recovery="restart"
>> use_virsh="1"/>
>>               <vm autostart="0" name="testVM3" recovery="restart"
>> use_virsh="1"/>
>> </rm>
>>
>> Failover and live migration work fine
>
> I thought that live migration without any access control on LVs would
> cause some corruption on the filesystems. But I guess that even without
> exclusive activation, I should use CLVM, shouldn't I?
>
There is no filesystem involved here, just raw devices with a boot
sector and so on...
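For completeness, a full disk stanza of the kind quoted above might look like this (only the type/device/source lines are from the original post; the target device name and bus are assumed defaults, not from Brem's config):

```xml
<!-- Hypothetical complete stanza; target dev/bus are assumptions -->
<disk type='block' device='disk'>
  <source dev='/dev/VMVG/lvolVM1'/>
  <target dev='vda' bus='virtio'/>
</disk>
```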

>> VM's must be defined on all nodes (after creation on one node, copy
>> the VM xml conf file to the other nodes and issue a virsh define
>> /Path/to/the/xml file)
>
> I'm not using virsh because I had only learned the old-school way to
> control VMs with xm. When I found out about the virsh tool, I had
> already modified the config files manually.
> Would it be better to recreate all of them using the libvirt infrastructure?
>
As mentioned above, my setup is KVM-based, not Xen, so I just cannot
use xm tools...
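The define-on-all-nodes step Brem describes above could be sketched roughly like this (node names and paths here are made up for illustration):

```shell
# Sketch only -- hostnames and file paths are hypothetical.
# On the node where the VM was created, export its definition:
virsh dumpxml testVM1 > /tmp/testVM1.xml

# Copy it to the other cluster nodes and register it there:
for node in node2 node3; do
    scp /tmp/testVM1.xml "${node}:/tmp/"
    ssh "${node}" virsh define /tmp/testVM1.xml
done
```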

>> The only thing that may look unsecure is the fact that the LV's are
>> active on all the nodes, a problem could happen if someone manually
>> started the VM's on some nodes while already active on another one.
>
> That's the point that made me ask for help here some time ago, and
> what concerns me most.

Cf. above, my point that there is no filesystem involved...

> Rafael Miranda told me about his lvm-cluster resource script. So, I
> developed a simple script that performs start, stop, and status
> operations. For stop, it saves the VM to a state file. For start, it
> either restores the VM if there is a state file for it, or creates it
> if there is not. Status just returns success if the VM appears in xm
> list, or failure if not. State files should be saved in a GFS
> directory, mounted on both nodes.
>
Rafael's resource works fine, but the thing with VMs is that one still
wants to benefit from live migration capabilities, etc...

As Lon stated, if the VM is part of a given service, live migration
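When the VM is instead a top-level <vm> resource, as in Brem's cluster.conf above, rgmanager can drive the live migration itself. As a sketch (the target node name is hypothetical):

```shell
# Ask rgmanager to live-migrate the VM resource to another member node.
clusvcadm -M vm:testVM1 -m node2
```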

> Then, I configure each VM as a service, with its lvm-cluster and
> script resources.
>
> So, relocating a "vm service" will look like a semi-live migration, if
> I can call it that. =) Actually, it saves the VM to a shared directory
> and restores it on the other node in a short time, without resetting
> it. It will look just like the VM had stopped for a moment and come
> back.
>
> But now I'm wondering if I have tried to reinvent the wheel. =)
>
Well, saving to disk can take time; I'm not sure it's the way to go
for migration purposes. The faster it is, the better.
Your approach requires a VM freeze, then a dump to shared disk
(time proportional to the VM's memory size), then the VM wake-up on
the other node.
Imagine the consequences of a clock jump on a server VM...
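To put a rough number on the "time proportional to VM size" point: with assumed figures of 4 GiB of guest RAM and ~100 MiB/s sustained write speed to the shared GFS directory (both numbers are illustrative, not measurements from this thread), the freeze window for the save alone is on the order of:

```python
# Back-of-the-envelope estimate; both figures are assumptions.
ram_mib = 4 * 1024        # guest memory to dump (4 GiB)
write_mib_per_s = 100     # sustained write speed to shared storage

freeze_seconds = ram_mib / write_mib_per_s
print(f"~{freeze_seconds:.0f} s of guest downtime just for the save")
```

And a similar window again for the restore on the target node, before the guest's clock even gets corrected.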

>> I'll try the setup with exclusive activation and check if live
>> migration still works (I doubt that).
>>
I was a bit optimistic, thinking that libvirt/virsh was doing a
vgchange -a y if the "disk source dev" is an LV; in fact, it doesn't.
It doesn't seem to be possible to benefit from live migration while
exclusive activation is in use.
Unless anyone knows a way to instruct libvirt/virsh to execute a
script prior to accessing the storage...
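For reference, with clvmd running, exclusive versus shared activation of one of the LVs would be requested along these lines (a sketch; the VG/LV names are taken from the quoted config):

```shell
# Request exclusive activation on this node; clvmd should then
# refuse activation of the same LV on the other cluster nodes:
lvchange -aey /dev/VMVG/lvolVM1

# Plain cluster-wide activation, as in Brem's "shared" setup:
lvchange -ay /dev/VMVG/lvolVM1
```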

>> Brem
>>
>
> What do you think about this?
>
> Thank you.
>
> --
> Linux-cluster mailing list
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster
>

