On 9/21/21 12:02, David Gibson wrote:
On Wed, Jul 07, 2021 at 09:57:31PM -0500, Shivaprasad G Bhat wrote:
If the device backend for the nvdimm is not persistent memory, there is
a need for explicit IO flushes on the backend to ensure persistence.
On SPAPR, the issue is addressed by adding a new hcall through which the
guest requests an explicit flush when the backend is not pmem. So, the
approach here is to convey whether the hcall flush is required via a device
tree property. Once the guest knows the device backend is not pmem, it
makes the hcall whenever a flush is required.
To set the device tree property, the patch introduces a new PAPR-specific
device type inheriting from the nvdimm device. When the backend doesn't have
pmem="yes", the device tree property "ibm,hcall-flush-required" is set,
and the guest makes the H_SCM_FLUSH hcall to request an explicit flush.
Signed-off-by: Shivaprasad G Bhat <sbhat@xxxxxxxxxxxxx>
<snip>
@@ -91,6 +93,14 @@ bool spapr_nvdimm_validate(HotplugHandler *hotplug_dev, NVDIMMDevice *nvdimm,
return false;
}
+ if (object_dynamic_cast(OBJECT(nvdimm), TYPE_SPAPR_NVDIMM) &&
+ (memory_region_get_fd(mr) < 0)) {
+ error_setg(errp, "spapr-nvdimm device requires the "
+ "memdev %s to be of memory-backend-file type",
+ object_get_canonical_path_component(OBJECT(dimm->hostmem)));
It's not obvious to me why the spapr nvdimm device has an additional
restriction here over the regular nvdimm device.
For memory-backend-ram the fd is set to -1, so the fdatasync would fail
later. This restriction prevents that hcall failure later on. Maybe it is
intentionally allowed with nvdimms for testing purposes. Let me
know if you want me to allow it with a dummy success return for the hcall.
+ return false;
+ }
+
return true;
}
@@ -162,6 +172,21 @@ static int spapr_dt_nvdimm(SpaprMachineState *spapr, void *fdt,
"operating-system")));
_FDT(fdt_setprop(fdt, child_offset, "ibm,cache-flush-required", NULL, 0));
+ if (object_dynamic_cast(OBJECT(nvdimm), TYPE_SPAPR_NVDIMM)) {
+ bool is_pmem = false;
+#ifdef CONFIG_LIBPMEM
+ PCDIMMDevice *dimm = PC_DIMM(nvdimm);
+ HostMemoryBackend *hostmem = dimm->hostmem;
+
+ is_pmem = object_property_get_bool(OBJECT(hostmem), "pmem",
+ &error_abort);
Presenting to the guest a property of the backend worries me
slightly. How the backends are synchronized between the source and
destination is out of scope for qemu: is there any possibility that we
could migrate from a host where the backend is pmem to one where it is
not (or the reverse)?
I think at the least we want a property on the spapr-nvdimm object
which will override what's presented to the guest (which, yes, might
mean lying to the guest). I think that could be important for
testing, if nothing else.
Mixed configurations can be attempted on a nested setup itself.
On a side note, using pmem=on with a non-pmem backend is being
deprecated, as that is an unsafe pretension, effective commit cdcf766d0b0.
I see your point; adding "pmem-override" (suggest a better name if you
have one) to spapr-nvdimm can be helpful, so I am adding it to the
spapr-nvdimm device. With pmem-override=on, the device tree property is
added, allowing the hcall flush even when pmem=on for the backend. This
works for migration compatibility in such a setup.
+#endif
+ if (!is_pmem) {
+ _FDT(fdt_setprop(fdt, child_offset, "ibm,hcall-flush-required",
+ NULL, 0));
+ }
+ }
+
return child_offset;
}
@@ -585,7 +610,16 @@ static target_ulong h_scm_flush(PowerPCCPU *cpu, SpaprMachineState *spapr,
}
dimm = PC_DIMM(drc->dev);
+ if (!object_dynamic_cast(OBJECT(dimm), TYPE_SPAPR_NVDIMM)) {
+ return H_PARAMETER;
+ }
Hmm. If you're going to make flushes specific to spapr nvdimms, you
could put the queue of pending flushes into the spapr-nvdimm object,
rather than having a global list in the machine.
Yes. I have changed the patches to move all the flush specific data
structures into the spapr-nvdimm object.
+
backend = MEMORY_BACKEND(dimm->hostmem);
+#ifdef CONFIG_LIBPMEM
+ if (object_property_get_bool(OBJECT(backend), "pmem", &error_abort)) {
+ return H_UNSUPPORTED;
Could you make this not be UNSUPPORTED, but instead fake the flush for
the pmem device? Either as a no-op, or simulating the guest invoking
the right cpu cache flushes? That seems like it would be more useful:
that way users who don't care too much about performance could just
always do a flush hcall and not have to have another path for the
"real" pmem case.
It would actually be wrong for the kernel to attempt that. The device
tree property is checked before setting the flush callback in the
kernel, so if someone makes the hcall without the device tree property
being set, it is mistaken/wrong usage.
For pmem-override=on, it's better to allow this as you suggested, along
with exposing the device tree property. I will call pmem_persist() for
pmem-backed devices, switching between pmem_persist() and fdatasync()
based on the backend type while flushing.
+ }
+#endif
fd = memory_region_get_fd(&backend->mr);
if (fd < 0) {
@@ -766,3 +800,15 @@ static void spapr_scm_register_types(void)
Thanks!