On Tue, May 9, 2017 at 10:00 AM, Ben Hutchings <ben.hutchings@xxxxxxxxxxxxxxx> wrote:
> On Tue, 2017-04-25 at 16:08 +0100, Greg Kroah-Hartman wrote:
>> 4.4-stable review patch.  If anyone has any objections, please let me know.
>>
>> ------------------
>>
>> From: Dan Williams <dan.j.williams@xxxxxxxxx>
>>
>> commit 11e63f6d920d6f2dfd3cd421e939a4aec9a58dcd upstream.
> [...]
>> +	if (iter_is_iovec(i)) {
>> +		unsigned long flushed, dest = (unsigned long) addr;
>> +
>> +		if (bytes < 8) {
>> +			if (!IS_ALIGNED(dest, 4) || (bytes != 4))
>> +				__arch_wb_cache_pmem(addr, 1);
> [...]
>
> What if the write crosses a cache line boundary?  I think you need the
> following fix-up (untested, I don't have this kind of hardware).
>
> Ben.
>
> ---
> From: Ben Hutchings <ben.hutchings@xxxxxxxxxxxxxxx>
> Subject: x86, pmem: Fix cache flushing for iovec write < 8 bytes
>
> Commit 11e63f6d920d added cache flushing for unaligned writes from an
> iovec, covering the first and last cache line of a >= 8 byte write and
> the first cache line of a < 8 byte write.  But an unaligned write of
> 2-7 bytes can still cover two cache lines, so make sure we flush both
> in that case.
>
> Fixes: 11e63f6d920d ("x86, pmem: fix broken __copy_user_nocache ...")
> Signed-off-by: Ben Hutchings <ben.hutchings@xxxxxxxxxxxxxxx>
> ---
>  arch/x86/include/asm/pmem.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/pmem.h b/arch/x86/include/asm/pmem.h
> index d5a22bac9988..0ff8fe71b255 100644
> --- a/arch/x86/include/asm/pmem.h
> +++ b/arch/x86/include/asm/pmem.h
> @@ -98,7 +98,7 @@ static inline size_t arch_copy_from_iter_pmem(void *addr, size_t bytes,
>
>  	if (bytes < 8) {
>  		if (!IS_ALIGNED(dest, 4) || (bytes != 4))
> -			arch_wb_cache_pmem(addr, 1);
> +			arch_wb_cache_pmem(addr, bytes);

Yes, this looks correct to me.  I deeply appreciate your attention to detail, Ben.
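
For anyone who wants to see the boundary case concretely, here is a
minimal userspace sketch (an illustration only, not kernel code; it
assumes a 64-byte cache line, and lines_touched() is a made-up helper,
not arch_wb_cache_pmem()):

#include <stdio.h>

#define CACHE_LINE 64UL

/* How many cache lines does a write of 'bytes' at 'dest' touch? */
static unsigned long lines_touched(unsigned long dest, unsigned long bytes)
{
	unsigned long first = dest & ~(CACHE_LINE - 1);
	unsigned long last = (dest + bytes - 1) & ~(CACHE_LINE - 1);

	return (last - first) / CACHE_LINE + 1;
}

int main(void)
{
	/* A 4-byte write at offset 62 straddles two lines; flushing a
	 * 1-byte range only covers the first, while flushing a
	 * 'bytes'-long range covers both. */
	printf("%lu\n", lines_touched(62, 4));	/* prints 2 */
	printf("%lu\n", lines_touched(8, 4));	/* prints 1 */
	return 0;
}

That second cache line is exactly what flushing a 1-byte range missed,
and what passing 'bytes' picks up.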