Re: RFC [3/3]: Lock manager usage scenarios

On Mon, 2010-09-13 at 13:35 +0100, Daniel P. Berrange wrote:
> On Fri, Sep 10, 2010 at 02:39:41PM -0600, Eric Blake wrote:
> > On 09/10/2010 10:01 AM, Daniel P. Berrange wrote:
> > >
> > >At libvirtd startup:
> > >
> > >   driver = virLockManagerPluginLoad("sync-manager");
> > >
> > >
> > >At libvirtd shutdown:
> > >
> > >   virLockManagerPluginUnload(driver)
> > 
> > Can you load more than one lock manager at a time, or just one active 
> > lock manager?  How does a user configure which lock manager(s) to load 
> > when libvirtd is started?
> 
> The intention is that any specific libvirt driver will only use
> one lock manager, but LXC vs QEMU vs UML could each use a different
> driver if required.
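
For what it's worth, here is a rough sketch of what that per-driver
wiring might look like on the QEMU side. The config key and the
virLockManagerPluginPtr type name are my guesses, not something the
RFC defines:

   /* Sketch: each hypervisor driver loads the plugin named by its own
    * config file, e.g. a hypothetical lock_manager = "sync-manager"
    * key in /etc/libvirt/qemu.conf. */
   static virLockManagerPluginPtr qemuLockPlugin;

   static int qemuLockManagerInit(const char *pluginName)
   {
       if (!(qemuLockPlugin = virLockManagerPluginLoad(pluginName)))
           return -1;       /* plugin missing or failed to load */
       return 0;
   }

   static void qemuLockManagerCleanup(void)
   {
       virLockManagerPluginUnload(qemuLockPlugin);
       qemuLockPlugin = NULL;
   }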
> 
> > >At guest startup:
> > >
> > >   manager = virLockManagerNew(driver,
> > >                               VIR_LOCK_MANAGER_START_DOMAIN,
> > >                               0);
> > >   virLockManagerSetParameter(manager, "id", id);
> > >   virLockManagerSetParameter(manager, "uuid", uuid);
> > >   virLockManagerSetParameter(manager, "name", name);
> > >
> > >   foreach disk
> > >     virLockManagerRegisterResource(manager,
> > >                                    VIR_LOCK_MANAGER_RESOURCE_TYPE_DISK,
> > >                                    disk.path,
> > >                                    ..flags...);
> > >
> > >   char **supervisorargv;
> > >   int supervisorargc;
> > >
> > >   supervisor = virLockManagerGetSupervisorPath(manager);
> > >   virLockManagerGetSupervisorArgs(&supervisorargv, &supervisorargc);
> > >
> > >   cmd = qemuBuildCommandLine(supervisor, supervisorargv, supervisorargc);
> > >
> > >   supervisorpid = virCommandExec(cmd);
> > >
> > >   if (!virLockManagerGetChild(manager,&qemupid))
> > >     kill(supervisorpid, SIGTERM); /* XXX or leave it running ??? */
> > 
> > Would it be better to first try virLockManagerShutdown?  And rather than 
> > a direct kill(), shouldn't this be virLockManagerFree?
> 
> Yes I guess so
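
So, folding that in, the startup failure path would look something
like this sketch (the choice of signal and the exact error-handling
shape are my assumptions):

   if (!virLockManagerGetChild(manager, &qemupid)) {
       /* Ask the supervisor to release its leases and exit cleanly;
        * only signal it if the graceful shutdown fails. */
       if (virLockManagerShutdown(manager) < 0)
           kill(supervisorpid, SIGTERM);
       virLockManagerFree(manager);
   }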
> 
> > >During migration:
> > >
> > >   1. On source host
> > >
> > >        if (!virLockManagerPrepareMigrate(manager, hosturi))
> > >            ..don't start migration..
> > >
> > >   2. On dest host
> > >
> > >       manager = virLockManagerNew(driver,
> > >                                   VIR_LOCK_MANAGER_START_DOMAIN,
> > >                                   VIR_LOCK_MANAGER_NEW_MIGRATE);
> > >       virLockManagerSetParameter(manager, "id", id);
> > >       virLockManagerSetParameter(manager, "uuid", uuid);
> > >       virLockManagerSetParameter(manager, "name", name);
> > >
> > >       foreach disk
> > >         virLockManagerRegisterResource(manager,
> > >                                        VIR_LOCK_MANAGER_RESOURCE_TYPE_DISK,
> > >                                        disk.path,
> > >                                        ..flags...);
> > 
> > So if there needs to be any relaxation of locks from exclusive to 
> > shared-write for the duration of the migration, that would be the 
> > responsibility of virLockManagerPrepareMigrate, and not done directly by 
> > libvirt?
> 
> As with my other reply on this topic, I didn't want to force a particular
> design / implementation strategy for migration, so I just put in actions
> at each key stage of migration. The driver impl can decide whether to do
> a plain release+reacquire, or use some kind of shared lock.
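
To illustrate, a plugin opting for the plain release+reacquire strategy
might implement the source-side hook roughly like this (the
virLockManagerPtr type and the syncReleaseAllLeases helper are
hypothetical, not part of the proposed API):

   static int syncPrepareMigrate(virLockManagerPtr manager,
                                 const char *hosturi)
   {
       /* Release the exclusive leases up front; the destination side
        * re-acquires them in its CompleteMigrateIn.  A fancier plugin
        * could instead downgrade each lease to shared-write here and
        * upgrade it again when the migration completes or is
        * cancelled. */
       return syncReleaseAllLeases(manager);
   }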
> 
> > >       char **supervisorargv;
> > >       int supervisorargc;
> > >
> > >       supervisor = virLockManagerGetSupervisorPath(manager);
> > >       virLockManagerGetSupervisorArgs(&supervisorargv, &supervisorargc);
> > >
> > >       cmd = qemuBuildCommandLine(supervisor, supervisorargv, supervisorargc);
> > >
> > >       supervisorpid = virCommandExec(cmd);
> > >
> > >       if (!virLockManagerGetChild(manager,&qemupid))
> > >         kill(supervisorpid, SIGTERM); /* XXX or leave it running ??? */
> > >
> > >   3. Initiate migration in QEMU on source and wait for completion
> > >
> > >   4a. On failure
> > >
> > >       4a1 On target
> > >
> > >             virLockManagerCompleteMigrateIn(manager,
> > >                                             VIR_LOCK_MANAGER_MIGRATE_CANCEL);
> > >             virLockManagerShutdown(manager);
> > >             virLockManagerFree(manager);
> > >
> > >       4a2 On source
> > >
> > >             virLockManagerCompleteMigrateIn(manager,
> > >                                             VIR_LOCK_MANAGER_MIGRATE_CANCEL);
> > 
> > Wouldn't this be virLockManagerCompleteMigrateOut?
> 
> Oops, yes.
> 
> > 
> > >
> > >   4b. On success
> > >
> > >
> > >       4b1 On target
> > >
> > >             virLockManagerCompleteMigrateIn(manager, 0);
> > >
> > >       4b2 On source
> > >
> > >             virLockManagerCompleteMigrateIn(manager, 0);
> > 
> > Likewise?
> 
> Yes
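
So with Eric's correction applied, the source-side completion steps
presumably read:

   4a2 On source (failure)

         virLockManagerCompleteMigrateOut(manager,
                                          VIR_LOCK_MANAGER_MIGRATE_CANCEL);

   4b2 On source (success)

         virLockManagerCompleteMigrateOut(manager, 0);
         virLockManagerShutdown(manager);
         virLockManagerFree(manager);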
> 
> > >             virLockManagerShutdown(manager);
> > >             virLockManagerFree(manager);
> > >
> > >
> > >Notes:
> > >
> > >   - If a lock manager impl does just VM level leases, it can
> > >     ignore all the resource paths at startup.
> > >
> > >   - If a lock manager impl does not support migrate
> > >     it can return an error from all migrate calls
> > >
> > >   - If a lock manager impl does not support hotplug
> > >     it can return an error from all resource acquire/release calls
> > >
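
Taken together, those notes mean a minimal conforming plugin can be
tiny. Something along these lines (the callback names and the
virLockManagerPtr type are mine) would do VM-level leases only and
refuse migration:

   /* VM-level leases only: per-disk resource paths are ignored */
   static int stubRegisterResource(virLockManagerPtr manager,
                                   unsigned int type,
                                   const char *path,
                                   unsigned int flags)
   {
       return 0;   /* nothing to track per disk */
   }

   /* migration unsupported: fail every migrate-phase call */
   static int stubPrepareMigrate(virLockManagerPtr manager,
                                 const char *hosturi)
   {
       return -1;  /* caller aborts the migration and reports it */
   }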
> > 
> > Overall, this looks workable to me.  As proposed, this assumes a 1:1 
> > relation between LockManager process and managed VMs.  But I guess you 
> > can still have a central manager process that manages all the VMs, by 
> > having the lock manager plugin spawn a simple shim process that does all 
> > the communication with the central lock manager.
> 
> I could have designed it such that it didn't assume the presence of an
> angel process around each VM, but I think it is easier to be able to
> presume that there is one. It can be an incredibly thin stub if desired,
> so I don't think it'll be too onerous on implementations.

We are looking into the possibility of not having a process manage each
VM, but rather having the sync_manager process register with a central
daemon and then exec into qemu (or anything else), so assuming there is
a process per VM is essentially false. But the verb could still be used
for "unregistering" the current instance with the main manager, so it
does have its use.

Furthermore, even if we decide to leave the current "sync_manager
process per child process" system as is for now, the general direction
is a central daemon per host that manages all the leases and guards all
processes. So be sure to keep that in mind while assembling the API.
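
Concretely, in that model the per-VM piece becomes little more than a
register-and-exec shim, something like this sketch (the socket path and
the helper names are hypothetical, just to show the shape):

   /* register this VM's leases with the central per-host daemon,
    * then turn into qemu -- no angel process is left behind */
   int fd = connectToLeaseDaemon("/var/run/sync-manager.sock");
   if (fd < 0 || registerLeases(fd, vmuuid, disks, ndisks) < 0)
       _exit(1);
   execv(qemuBinary, qemuArgv);
   _exit(127);  /* only reached if the exec itself failed */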
> 
> Daniel

