On Wed, Dec 04, 2019 at 17:12:14 -0600, Eric Blake wrote:
> On 12/3/19 11:17 AM, Peter Krempa wrote:
> > This allows to start and manage the backup job.
> >
> > Signed-off-by: Peter Krempa <pkrempa@xxxxxxxxxx>
> > ---
> >  po/POTFILES.in           |   1 +
> >  src/qemu/Makefile.inc.am |   2 +
> >  src/qemu/qemu_backup.c   | 941 +++++++++++++++++++++++++++++++++++++++
>
> Large patch, but I'm not sure how it could be subdivided.
>
> >  src/qemu/qemu_backup.h   |  41 ++
> >  src/qemu/qemu_driver.c   |  47 ++
> >  5 files changed, 1032 insertions(+)
> >  create mode 100644 src/qemu/qemu_backup.c
> >  create mode 100644 src/qemu/qemu_backup.h
> >
>
> > +++ b/src/qemu/qemu_backup.c
>
> > +static int
> > +qemuBackupPrepare(virDomainBackupDefPtr def)
> > +{
> > +
> > +    if (def->type == VIR_DOMAIN_BACKUP_TYPE_PULL) {
> > +        if (!def->server) {
> > +            def->server = g_new(virStorageNetHostDef, 1);
> > +
> > +            def->server->transport = VIR_STORAGE_NET_HOST_TRANS_TCP;
> > +            def->server->name = g_strdup("localhost");
> > +        }
> > +
> > +        switch ((virStorageNetHostTransport) def->server->transport) {
> > +        case VIR_STORAGE_NET_HOST_TRANS_TCP:
> > +            /* TODO: Update qemu.conf to provide a port range,
> > +             * probably starting at 10809, for obtaining automatic
> > +             * port via virPortAllocatorAcquire, as well as store
> > +             * somewhere if we need to call virPortAllocatorRelease
> > +             * during BackupEnd. Until then, user must provide port */
>
> This TODO survives from my initial code, and does not seem to be addressed
> later in the series. Not a show-stopper for the initial implementation, but
> something to remember for followup patches.
>
> > +            if (!def->server->port) {
> > +                virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
> > +                               _("<domainbackup> must specify TCP port for now"));
> > +                return -1;
> > +            }
> > +            break;
> > +
> > +        case VIR_STORAGE_NET_HOST_TRANS_UNIX:
> > +            /* TODO: Do we need to mess with selinux? */
>
> This should be addressed as well (or deleted, if it works out of the box).
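To make the default-server behavior discussed above concrete, here is a minimal Python sketch mirroring the quoted qemuBackupPrepare() logic. This is an illustration only, not libvirt code: the dict shape and the function name are hypothetical.

```python
# Illustrative mirror of the quoted qemuBackupPrepare() pull-mode logic:
# a missing <server> defaults to TCP on localhost, and a TCP server without
# an explicit port is rejected until automatic port allocation
# (virPortAllocatorAcquire) is wired up.
# The dict shape and function name are hypothetical, not a libvirt API.

def prepare_pull_server(server):
    if server is None:
        # Mirrors: g_new(virStorageNetHostDef, 1) + TCP/localhost defaults.
        server = {"transport": "tcp", "name": "localhost", "port": 0}

    if server["transport"] == "tcp" and not server.get("port"):
        # Mirrors: virReportError(VIR_ERR_OPERATION_UNSUPPORTED, ...)
        raise ValueError("<domainbackup> must specify TCP port for now")

    return server
```

As the sketch shows, the defaulted server has no usable port, so until the qemu.conf port-range TODO is resolved a pull backup over TCP always needs a user-supplied port.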
> >
> > +static int
> > +qemuBackupDiskPrepareOneBitmaps(struct qemuBackupDiskData *dd,
> > +                                virJSONValuePtr actions,
> > +                                virDomainMomentObjPtr *incremental)
> > +{
> > +    g_autoptr(virJSONValue) mergebitmaps = NULL;
> > +    g_autoptr(virJSONValue) mergebitmapsstore = NULL;
> > +
> > +    if (!(mergebitmaps = virJSONValueNewArray()))
> > +        return -1;
> > +
> > +    /* TODO: this code works only if the bitmaps are present on a single node.
> > +     * The algorithm needs to be changed so that it looks into the backing chain
> > +     * so that we can combine all relevant bitmaps for a given backing chain */
>
> Correct - but mixing incremental backup with external snapshots is something
> that we know is future work. It's okay for the initial implementation that
> we only support a single node.
>
> > +    while (*incremental) {
> > +        if (qemuMonitorTransactionBitmapMergeSourceAddBitmap(mergebitmaps,
> > +                                                             dd->domdisk->src->nodeformat,
> > +                                                             (*incremental)->def->name) < 0)
> > +            return -1;
> > +
> > +        incremental++;
> > +    }
> > +
> > +    if (!(mergebitmapsstore = virJSONValueCopy(mergebitmaps)))
> > +        return -1;
> > +
> > +    if (qemuMonitorTransactionBitmapAdd(actions,
> > +                                        dd->domdisk->src->nodeformat,
> > +                                        dd->incrementalBitmap,
> > +                                        false,
> > +                                        true) < 0)
> > +        return -1;
> > +
> > +    if (qemuMonitorTransactionBitmapMerge(actions,
> > +                                          dd->domdisk->src->nodeformat,
> > +                                          dd->incrementalBitmap,
> > +                                          &mergebitmaps) < 0)
> > +        return -1;
> > +
> > +    if (qemuMonitorTransactionBitmapAdd(actions,
> > +                                        dd->store->nodeformat,
> > +                                        dd->incrementalBitmap,
> > +                                        false,
> > +                                        true) < 0)
> > +        return -1;
>
> Why do we need two of these calls?
> /me reads on
>
> > +
> > +    if (qemuMonitorTransactionBitmapMerge(actions,
> > +                                          dd->store->nodeformat,
> > +                                          dd->incrementalBitmap,
> > +                                          &mergebitmapsstore) < 0)
> > +        return -1;
>
> okay, it looks like you are trying to merge the same bitmap into two
> different merge commands, which will all be part of the same transaction. I
> guess it would be helpful to see a trace of the transaction in action to see
> if we can simplify it (using 2 instead of 4 qemuMonitor* commands).

This is required because the backup blockjob requires the bitmap to be
present on the source ('device') image of the backup. The same also applies
to the image exported by NBD.

The catch is that we expose the scratch file via NBD, which is actually the
target image of the backup. This means that in the case of a pull backup we
need two instances of the bitmap, so that both the block job and the NBD
server can use them.

Arguably there's a possible simplification here for push-mode backups, where
the target bitmap doesn't make sense.

> > +
> > +
> > +    return 0;
> > +}
> > +
> > +
> > +static int
> > +qemuBackupDiskPrepareDataOne(virDomainObjPtr vm,
>
> and this is as far as my review got today. I'll resume again as soon as I
> can.
>
> Other than one obvious thing I saw in passing:
>
> > @@ -22953,6 +22998,8 @@ static virHypervisorDriver qemuHypervisorDriver = {
> >      .domainCheckpointDelete = qemuDomainCheckpointDelete, /* 5.6.0 */
> >      .domainGetGuestInfo = qemuDomainGetGuestInfo, /* 5.7.0 */
> >      .domainAgentSetResponseTimeout = qemuDomainAgentSetResponseTimeout, /* 5.10.0 */
> > +    .domainBackupBegin = qemuDomainBackupBegin, /* 5.10.0 */
> > +    .domainBackupGetXMLDesc = qemuDomainBackupGetXMLDesc, /* 5.10.0 */
>
> These should be 6.0.0
>
> --
> Eric Blake, Principal Software Engineer
> Red Hat, Inc.           +1-919-301-3226
> Virtualization:  qemu.org | libvirt.org

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list
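For the trace Eric asked about, the following is a hedged sketch (in Python, for brevity) of the four actions that the qemuMonitor* helpers in qemuBackupDiskPrepareOneBitmaps() add to the QMP 'transaction': a temporary bitmap is created on both the source node and the scratch node, and the same set of source-node checkpoint bitmaps is merged into each copy. The node names, bitmap names, and the builder function itself are illustrative assumptions, not real libvirt output.

```python
# Hedged sketch of the QMP 'transaction' actions built by the four
# qemuMonitor* calls: one temporary bitmap per node (source node for the
# backup blockjob, scratch node for the NBD export), each filled by merging
# the same source-node checkpoint bitmaps.
# Node and bitmap names are illustrative, not real libvirt output.

def build_bitmap_actions(source_node, scratch_node, tmp_bitmap, checkpoints):
    # The merge sources always reference the *source* node, matching
    # virJSONValueCopy(mergebitmaps) -> mergebitmapsstore in the C code.
    merge_sources = [{"node": source_node, "name": name} for name in checkpoints]

    actions = []
    for node in (source_node, scratch_node):
        actions.append({"type": "block-dirty-bitmap-add",
                        "data": {"node": node,
                                 "name": tmp_bitmap,
                                 "persistent": False}})
        actions.append({"type": "block-dirty-bitmap-merge",
                        "data": {"node": node,
                                 "target": tmp_bitmap,
                                 "bitmaps": merge_sources}})
    return actions
```

As noted above, a push-mode backup could in principle drop the second add/merge pair, since the bitmap on the target node only matters when the scratch image is exported over NBD.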