On Thu, 2017-04-06 at 13:59 -0700, Dan Williams wrote:
> Before we rework the "pmem api" to stop abusing __copy_user_nocache()
> for memcpy_to_pmem() we need to fix cases where we may strand dirty
> data in the cpu cache. The problem occurs when copy_from_iter_pmem()
> is used for arbitrary data transfers from userspace. There is no
> guarantee that these transfers, performed by dax_iomap_actor(), will
> have aligned destinations or aligned transfer lengths. Backstop the
> usage of __copy_user_nocache() with explicit cache management in these
> unaligned cases.
>
> Yes, copy_from_iter_pmem() is now too big for an inline, but
> addressing that is saved for a later patch that moves the entirety of
> the "pmem api" into the pmem driver directly.

The change looks good to me. Should we also skip the cache flush when
the transfer size is 4 bytes and the destination is 4-byte aligned?

Thanks,
-Toshi
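
[Editorial note: for reference, below is a minimal sketch of the kind of
unaligned-copy backstop being discussed. It is not the actual patch;
wb_cache_pmem() is a hypothetical stand-in for whatever cache write-back
helper the pmem api exposes, assumed to flush the cache line(s) covering
[addr, addr + size).]

	/*
	 * Illustrative sketch only, not the patch under review.
	 * wb_cache_pmem() is an assumed helper that writes back the
	 * cache line(s) covering [addr, addr + size).
	 */
	#include <linux/kernel.h>	/* IS_ALIGNED() */
	#include <linux/uio.h>		/* struct iov_iter, copy_from_iter_nocache() */

	static void wb_cache_pmem(void *addr, size_t size);	/* assumed helper */

	static size_t copy_from_iter_pmem_sketch(void *addr, size_t bytes,
						 struct iov_iter *i)
	{
		unsigned long dest = (unsigned long)addr;
		size_t len;

		/* The bulk of the copy goes through non-temporal stores... */
		len = copy_from_iter_nocache(addr, bytes, i);

		/*
		 * ...but __copy_user_nocache() may fall back to cached
		 * stores for an unaligned head and for a short tail, so
		 * conservatively write back the cache lines covering
		 * those ranges.
		 */
		if (!IS_ALIGNED(dest, 8))
			wb_cache_pmem(addr, 1);			/* unaligned head */
		if (bytes && !IS_ALIGNED(dest + bytes, 8))
			wb_cache_pmem(addr + bytes - 1, 1);	/* unaligned tail */

		return len;
	}

In this conservative form the tail write-back would also fire for a
4-byte transfer to a 4-byte-aligned destination, which is exactly the
case the question above suggests could skip the flush.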