On Mon, Jan 29, 2024 at 16:38:44 -0600, Jonathon Jongsma wrote:
> On 1/29/24 2:25 AM, Peter Krempa wrote:
> > On Thu, Jan 25, 2024 at 14:52:11 -0600, Jonathon Jongsma wrote:

[...]

> > [...]
> >
> > > @@ -8059,6 +8061,11 @@ qemuDomainStorageSourceAccessModify(virQEMUDriver *driver,
> > >              revoke_nvme = true;
> > > +        if (qemuNbdkitStartStorageSource(driver, vm, src, chain) < 0)
> > > +            goto revoke;
> > > +
> > > +        revoke_nbdkit = true;
> > > +
> >
> > Note that qemuDomainStorageSourceAccessModify can be called on already
> > existing 'virStorageSource' to e.g. change from read-only to read-write
> > mode or back.
>
> That's precisely why I put it inside of this block:
>
>     if (!(flags & QEMU_DOMAIN_STORAGE_SOURCE_ACCESS_MODIFY_ACCESS)) {
>
> And it seems that the MODIFY_ACCESS flag is always set unless we are adding
> a new source, since this is also the way that new nvme devices are set up.
>
> > [...]
> >
> > >          if (qemuDomainNamespaceSetupDisk(vm, src, &revoke_namespace) < 0)
> > >              goto revoke;
> > >      }
> > >
> > > @@ -8113,6 +8120,9 @@ qemuDomainStorageSourceAccessModify(virQEMUDriver *driver,
> > >              VIR_WARN("Unable to release lock on %s", srcstr);
> > >      }
> > > +    if (revoke_nbdkit)
> > > +        qemuNbdkitStopStorageSource(src, vm, chain);
> > > +
> > >   cleanup:
> > >      src->readonly = was_readonly;
> > >      virErrorRestore(&orig_err);
> > > diff --git a/src/qemu/qemu_nbdkit.c b/src/qemu/qemu_nbdkit.c
> > > index edbe2137d0..73dc90af3e 100644
> > > --- a/src/qemu/qemu_nbdkit.c
> > > +++ b/src/qemu/qemu_nbdkit.c
> > > @@ -1195,6 +1195,13 @@ qemuNbdkitProcessStart(qemuNbdkitProcess *proc,
> > >      struct nbd_handle *nbd = NULL;
> > >  #endif
> > > +    /* don't try to start nbdkit again if we've already started it */
> > > +    if (proc->pid > 0) {
> > > +        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
> > > +                       _("Attempting to start nbdkit twice"));
> > > +        return -1;
> > > +    }
> >
> > [...]
> >
> > Which means that this WILL get hit multiple times.
> > Additionally,
> > don't forget that qemuDomainStorageSourceAccessModify is called from
> > qemuDomainStorageSourceChainAccessAllow.
> >
> > Which is e.g. called from qemuDomainAttachDeviceDiskLiveInternal where
> > we have this code:
> >
> >     if (!virStorageSourceIsEmpty(disk->src)) {
> >         if (qemuDomainPrepareDiskSource(disk, priv, cfg) < 0)
> >             goto cleanup;
> >
> >         if (qemuDomainDetermineDiskChain(driver, vm, disk, NULL) < 0)
> >             goto cleanup;
> >
> >         if (qemuDomainStorageSourceChainAccessAllow(driver, vm, disk->src) < 0)
> >             ^^^^^^^^^^^^^^^^^^^
> >             goto cleanup;
> >
> >         releaseSeclabel = true;
> >
> >         if (qemuProcessPrepareHostStorageDisk(vm, disk) < 0)
> >             goto cleanup;
> >
> >         if (qemuHotplugAttachManagedPR(vm, disk->src, VIR_ASYNC_JOB_NONE) < 0)
> >             goto cleanup;
> >
> >         if (qemuNbdkitStartStorageSource(driver, vm, disk->src, true) < 0)
> >             ^^^^^^^^^^^^^^^^^
> >             goto cleanup;
> >     }
>
> As you point out, I didn't catch this case. It seems to me that the right
> approach here is to just remove the call to qemuNbdkitStartStorageSource()
> from here and instead rely on the qemuDomainStorageSourceChainAccessAllow()
> call above to initialize nbdkit.
>
> > > +
> > >      if (!(cmd = qemuNbdkitProcessBuildCommand(proc)))
> > >          return -1;
> >
> > Also, conceptually the qemuDomainStorageSourceAccessModify function deals
> > with security and permissions of image files. I don't think that the
> > NBDKit helper process code belongs in it.
> >
> > NACK
>
> Fair point, but we'll still need to do something to start the nbdkit process
> in the same basic code path as these calls to AccessAllow()/AccessRevoke()
> if we want to support nbdkit-backed disks in these paths. And IMO it's not
> all that dissimilar to the work that we're doing here to detach the nvme
> devices from the host, etc. Or is there a better spot to handle this?

Hmm, good point.
I didn't realize that the setup of the NVMe device is so complex and that
it has crept into this function. Based on that I can't really argue against
setting up nbdkit there too.

Once you fix the issue I've pointed out above:

Reviewed-by: Peter Krempa <pkrempa@xxxxxxxxxx>

_______________________________________________
Devel mailing list -- devel@xxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxx