The nvdimm devices are expected to ensure write persistence during
power failure kind of scenarios. On POWER, libpmem uses architecture
specific instructions like dcbf to flush cached data to the backend
nvdimm device during normal writes.

QEMU's virtual nvdimm devices are memory mapped, so a dcbf in the
guest does not translate to an actual flush to the backend file on
the host in case of file backed v-nvdimms. On x86_64, virtio-pmem
addresses this by translating explicit guest flushes into fdatasync
calls at QEMU. On PAPR, the issue is addressed by adding a new hcall
through which the guest ndctl driver requests an explicit flush when
the backend nvdimm cannot ensure write persistence with dcbf alone.

So, the approach here is to convey through a device tree property
when the hcall flush is required. The guest makes the hcall when the
property is found, instead of relying on dcbf. The hcall number and
semantics are finalized, so the RFC prefix is dropped.

A new device property, sync-dax, is added to the nvdimm device. When
sync-dax is "off" (the default), the device tree property
"hcall-flush-required" is set, and the guest makes the H_SCM_FLUSH
hcall requesting an explicit flush. sync-dax is "off" by default on
all new pseries machines; on machines prior to 5.2 it is "on".

The demonstration below shows the map_sync behavior with sync-dax on
and off, using
https://github.com/avocado-framework-tests/avocado-misc-tests/blob/master/memory/ndctl.py.data/map_sync.c

pmem0 is from an nvdimm with sync-dax=on, and pmem1 is from an nvdimm
with sync-dax=off, mounted as

/dev/pmem0 on /mnt1 type xfs (rw,relatime,attr2,dax=always,inode64,logbufs=8,logbsize=32k,noquota)
/dev/pmem1 on /mnt2 type xfs (rw,relatime,attr2,dax=always,inode64,logbufs=8,logbsize=32k,noquota)

[root@atest-guest ~]# ./mapsync /mnt1/newfile ----> when sync-dax=on
[root@atest-guest ~]# ./mapsync /mnt2/newfile ----> when sync-dax=off
Failed to mmap with Operation not supported

The first patch does the header file cleanup necessary for the
subsequent ones. The second patch implements the hcall and adds the
necessary vmstate fields to the spapr machine structure for carrying
the hcall status across save-restore; the hcall being asynchronous,
the patch uses QEMU's aio utilities to offload the flush. The third
patch adds the 'sync-dax' device property and enables the device tree
property for the guest to utilise the hcall. Illustrative sketches of
the device tree flag, the map_sync check, the thread pool flush
offload, and the machine compat entry follow.
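Concretely, the guest-visible switch from patch 3 amounts to an empty
device tree property on the nvdimm node. A minimal sketch, not the
exact hunk from the patch; "child_offset" is an illustrative variable
name and the property spelling follows this letter:

    /* While building the nvdimm DT node in spapr_nvdimm.c. */
    if (!object_property_get_bool(OBJECT(nvdimm), "sync-dax",
                                  &error_abort)) {
        /*
         * Empty property: its presence tells the guest ndctl driver
         * to flush via H_SCM_FLUSH instead of relying on dcbf alone.
         */
        _FDT(fdt_setprop(fdt, child_offset, "hcall-flush-required",
                         NULL, 0));
    }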
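For context on the demonstration: map_sync essentially attempts an
mmap(2) with MAP_SYNC | MAP_SHARED_VALIDATE, which the kernel refuses
when it cannot guarantee that CPU cache flushes alone persist writes.
A minimal standalone equivalent, written here for illustration and
not copied from the linked test:

    #define _GNU_SOURCE  /* MAP_SYNC/MAP_SHARED_VALIDATE, glibc >= 2.28 */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int fd;
        void *addr;

        if (argc < 2) {
            fprintf(stderr, "usage: %s <file-on-dax-fs>\n", argv[0]);
            return 1;
        }
        fd = open(argv[1], O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, 4096) < 0) {
            perror("open/ftruncate");
            return 1;
        }
        /*
         * On a region flagged hcall-flush-required the kernel cannot
         * make this guarantee, so the mapping fails with EOPNOTSUPP.
         */
        addr = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                    MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
        if (addr == MAP_FAILED) {
            perror("Failed to mmap");
            return 1;
        }
        munmap(addr, 4096);
        close(fd);
        return 0;
    }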
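For patch 2, the flush itself is an fdatasync of the backing file,
pushed to QEMU's thread pool so the hcall can return while the sync
runs. A sketch under those assumptions; the state structure and
function names are illustrative, not the patch's own:

    #include "qemu/osdep.h"
    #include "qemu/cutils.h"
    #include "block/aio.h"
    #include "block/thread-pool.h"

    typedef struct SpaprNVDIMMFlushState {
        uint64_t continue_token;  /* handed to the guest while pending */
        int backend_fd;           /* fd of the file backing the v-nvdimm */
        int hcall_ret;
    } SpaprNVDIMMFlushState;

    static int flush_worker_cb(void *opaque)
    {
        SpaprNVDIMMFlushState *state = opaque;

        /*
         * The actual persistence point: sync the host backing file,
         * which is what a guest dcbf cannot achieve on its own.
         */
        return qemu_fdatasync(state->backend_fd);
    }

    static void spapr_nvdimm_flush_completion_cb(void *opaque, int ret)
    {
        SpaprNVDIMMFlushState *state = opaque;

        /*
         * Record the outcome; the guest re-issues H_SCM_FLUSH with the
         * continue token until it gets the final return code.
         */
        state->hcall_ret = ret;
    }

    static void spapr_nvdimm_submit_flush(SpaprNVDIMMFlushState *state)
    {
        ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());

        thread_pool_submit_aio(pool, flush_worker_cb, state,
                               spapr_nvdimm_flush_completion_cb, state);
    }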
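The hw_compat magic mentioned below in the changelog would take
roughly this shape; the exact array (hw_compat_5_1 here, following
"prior to 5.2" above) and its other entries depend on the release
this lands in:

    #include "hw/qdev-core.h"

    /* Keeps pre-5.2 pseries machine types on the old behaviour. */
    GlobalProperty hw_compat_5_1[] = {
        { "nvdimm", "sync-dax", "on" },
    };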
---
v2 - https://lists.gnu.org/archive/html/qemu-devel/2020-11/msg07031.html

Changes from v2:
      - Using the thread pool based approach as suggested by Greg
      - Moved the async hcall handling code to spapr_nvdimm.c along
        with some simplifications
      - Added vmstate to preserve the hcall status during
        save-restore along with pre_save handler code to complete all
        ongoing flushes.
      - Added hw_compat magic for sync-dax 'on' on previous machines.
      - Miscellaneous minor fixes.

v1 - https://lists.gnu.org/archive/html/qemu-devel/2020-11/msg06330.html

Changes from v1:
      - Fixed a missed-out unlock
      - Using QLIST_FOREACH instead of QLIST_FOREACH_SAFE while
        generating token

Shivaprasad G Bhat (3):
      spapr: nvdimm: Forward declare and move the definitions
      spapr: nvdimm: Implement scm flush hcall
      spapr: nvdimm: Enable sync-dax device property for nvdimm

 hw/core/machine.c             |    1
 hw/mem/nvdimm.c               |    1
 hw/ppc/spapr.c                |    6 +
 hw/ppc/spapr_nvdimm.c         |  269 +++++++++++++++++++++++++++++++++++++++++
 include/hw/mem/nvdimm.h       |   10 ++
 include/hw/ppc/spapr.h        |   12 ++
 include/hw/ppc/spapr_nvdimm.h |   34 +--

 7 files changed, 317 insertions(+), 16 deletions(-)

--
Signature