Hi Dan,

Thanks for your reply.

>
> On Fri, Jan 12, 2018 at 10:23 PM, Pankaj Gupta <pagupta@xxxxxxxxxx> wrote:
> >
> > Hello Dan,
> >
> >> Not a flag, but a new "Address Range Type GUID". See section "5.2.25.2
> >> System Physical Address (SPA) Range Structure" in the ACPI 6.2A
> >> specification. Since it is a GUID we could define a Linux specific
> >> type for this case, but spec changes would allow non-Linux hypervisors
> >> to advertise a standard interface to guests.
> >>
> >
> > I have added a new SPA range with a GUID for this memory type, and I
> > could add this new memory type to the system memory map. I need help
> > with the namespace handling for this new type. As mentioned in the [1]
> > discussion:
> >
> > - Create a new namespace for this new memory type
> > - Teach libnvdimm how to handle this new namespace
> >
> > I have some queries on this:
> >
> > 1] How would namespace handling work for this new memory type?
>
> This would be a namespace that creates a pmem device, but does not
> allow DAX.

o.k.

> >
> > 2] There are existing namespace types:
> > ND_DEVICE_NAMESPACE_IO, ND_DEVICE_NAMESPACE_PMEM, ND_DEVICE_NAMESPACE_BLK
> >
> > How will libnvdimm handle this new namespace type in conjunction with
> > the existing memory types, regions & namespaces?
>
> The type will be either ND_DEVICE_NAMESPACE_IO or
> ND_DEVICE_NAMESPACE_PMEM depending on whether you configure KVM to
> provide a virtual NVDIMM and label space. In other words the only
> difference between this range and a typical persistent memory range is
> that we will have a flag to disable DAX operation.

o.k. In short, we should disable the 'QUEUE_FLAG_DAX' flag for this
namespace & region, and also skip the code below for this new type?
(A rough sketch of what I mean is at the end of this mail.)

pmem_attach_disk()
...
...
        dax_dev = alloc_dax(pmem, disk->disk_name, &pmem_dax_ops);
        if (!dax_dev) {
                put_disk(disk);
                return -ENOMEM;
        }
        dax_write_cache(dax_dev, wbc);
        pmem->dax_dev = dax_dev;

> See the usage of nvdimm_has_cache() in pmem_attach_disk() as an
> example of how to pass attributes about the "region" to the pmem
> driver.

sure.

> >
> > 3] For sending guest-to-host flush commands, do we still have to think
> > about some async way?
>
> I thought we discussed this being a paravirtualized virtio command ring?

o.k. will implement this.
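
For 2] above, here is the rough sketch I have in mind for
pmem_attach_disk(), following the nvdimm_has_cache() pattern for passing
region attributes to the pmem driver. Note that nvdimm_has_dax() below is
a hypothetical helper (not an existing libnvdimm API); it stands in for
whichever region attribute we end up deriving from the new SPA range type
GUID:

        /*
         * Sketch only, against pmem_attach_disk() in drivers/nvdimm/pmem.c.
         * nvdimm_has_dax() is a hypothetical helper, modeled on
         * nvdimm_has_cache(): it would report whether the region's SPA
         * range type permits DAX.
         */
        if (nvdimm_has_dax(nd_region)) {
                dax_dev = alloc_dax(pmem, disk->disk_name, &pmem_dax_ops);
                if (!dax_dev) {
                        put_disk(disk);
                        return -ENOMEM;
                }
                dax_write_cache(dax_dev, wbc);
                pmem->dax_dev = dax_dev;
                /* only advertise DAX when the region allows it */
                queue_flag_set_unlocked(QUEUE_FLAG_DAX, q);
        }

For the new memory type the branch is simply not taken, so the namespace
still comes up as a regular pmem block device, just without DAX.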
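And for 3], to make sure I understand the virtio command ring idea, the
guest side could look something like the sketch below. Everything here
(the request layout, VIRTIO_PMEM_REQ_FLUSH, the function name) is a
placeholder of my own, not an agreed device spec:

        /* Hypothetical guest->host flush request over a virtio queue. */
        struct virtio_pmem_request {
                __le32 type;    /* e.g. VIRTIO_PMEM_REQ_FLUSH (placeholder) */
                __le32 ret;     /* status filled in by the host */
        };

        static int virtio_pmem_flush(struct virtqueue *vq,
                                     struct virtio_pmem_request *req)
        {
                struct scatterlist out, in;
                struct scatterlist *sgs[] = { &out, &in };

                req->type = cpu_to_le32(0 /* VIRTIO_PMEM_REQ_FLUSH */);
                sg_init_one(&out, &req->type, sizeof(req->type));
                sg_init_one(&in, &req->ret, sizeof(req->ret));

                /*
                 * Queue the request and notify the host; completion would
                 * be handled asynchronously in the virtqueue callback, so
                 * the guest vcpu is not blocked while the host flushes.
                 */
                if (virtqueue_add_sgs(vq, sgs, 1, 1, req, GFP_ATOMIC) < 0)
                        return -ENOSPC;
                virtqueue_kick(vq);
                return 0;
        }

I will prototype along these lines.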