On Wed, 2017-07-26 at 14:40 -0700, Dan Williams wrote:
> On Wed, Jul 26, 2017 at 2:27 PM, Rik van Riel <riel@xxxxxxxxxx> wrote:
> > On Wed, 2017-07-26 at 09:47 -0400, Pankaj Gupta wrote:
> > >
> > > Just want to summarize here (high level):
> > >
> > > This will require implementing a new 'virtio-pmem' device which
> > > presents a DAX address range (like pmem) to the guest with
> > > read/write (direct access) & device flush functionality. Also,
> > > qemu should implement corresponding support for flush using
> > > virtio.
> >
> > Alternatively, the existing pmem code, with a flush-only block
> > device on the side, which is somehow associated with the pmem
> > device.
> >
> > I wonder which alternative leads to the least code duplication,
> > and the least maintenance hassle going forward.
>
> I'd much prefer to have another driver. I.e. a driver that refactors
> out some common pmem details into a shared object and can attach to
> ND_DEVICE_NAMESPACE_{IO,PMEM}. A control device on the side seems
> like a recipe for confusion.

At that point, would it make sense to expose these special
virtio-pmem areas to the guest in a slightly different way, so that
the regions which need virtio flushing are not bound by the regular
driver, and the regular driver can continue to work for memory
regions that are backed by actual pmem in the host?

> With a $new_driver in hand you can just do:
>
>     modprobe $new_driver
>     echo $namespace > /sys/bus/nd/drivers/nd_pmem/unbind
>     echo $namespace > /sys/bus/nd/drivers/$new_driver/new_id
>     echo $namespace > /sys/bus/nd/drivers/$new_driver/bind
>
> ...and the guest can arrange for $new_driver to be the default, so
> you don't need to do those steps each boot of the VM, by doing:
>
>     echo "blacklist nd_pmem" > /etc/modprobe.d/virt-dax-flush.conf
>     echo "alias nd:t4* $new_driver" >> /etc/modprobe.d/virt-dax-flush.conf
>     echo "alias nd:t5* $new_driver" >> /etc/modprobe.d/virt-dax-flush.conf
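
[Editor's note: for context, the "nd:t4*" and "nd:t5*" patterns above are the
modaliases emitted for nd namespace devices (ND_DEVICE_NAMESPACE_IO is type 4
and ND_DEVICE_NAMESPACE_PMEM is type 5 in include/uapi/linux/ndctl.h). Below
is a rough, hypothetical sketch of how such a $new_driver could register for
those namespace types, modeled on drivers/nvdimm/pmem.c; the driver name,
function names, and the empty probe body are placeholders, not an agreed-on
implementation from this thread.]

    /*
     * Hypothetical $new_driver skeleton (names are placeholders).
     * Assumes the nd_device_driver / module_nd_driver interface used
     * by drivers/nvdimm/pmem.c in 4.13-era kernels.
     */
    #include <linux/module.h>
    #include <linux/nd.h>

    static int virtio_pmem_probe(struct device *dev)
    {
            /* real setup (DAX mapping, virtio flush channel) would go here */
            return 0;
    }

    static int virtio_pmem_remove(struct device *dev)
    {
            return 0;
    }

    static struct nd_device_driver virtio_pmem_driver = {
            .probe = virtio_pmem_probe,
            .remove = virtio_pmem_remove,
            .drv = {
                    .name = "virtio_pmem",  /* hypothetical name */
            },
            /* claim the same namespace types nd_pmem binds today */
            .type = ND_DRIVER_NAMESPACE_IO | ND_DRIVER_NAMESPACE_PMEM,
    };
    module_nd_driver(virtio_pmem_driver);

    /*
     * Namespace devices of these types emit "nd:t4" / "nd:t5" modalias
     * uevents, which is what the "alias nd:t4*" lines above match.
     */
    MODULE_ALIAS_ND_DEVICE(ND_DEVICE_NAMESPACE_IO);
    MODULE_ALIAS_ND_DEVICE(ND_DEVICE_NAMESPACE_PMEM);
    MODULE_LICENSE("GPL");

With those aliases declared, blacklisting nd_pmem as in the modprobe.d
example above should let this module be loaded for type-4/type-5
namespaces by default, without the per-boot unbind/bind dance.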