On Wed, Oct 27, 2021 at 01:40:47AM +0000, Chen Huang wrote:
> From: Robin Murphy <robin.murphy@xxxxxxx>
>
> commit 295cf156231ca3f9e3a66bde7fab5e09c41835e0 upstream.
>
> Al reminds us that the usercopy API must only return complete failure
> if absolutely nothing could be copied. Currently, if userspace does
> something silly like giving us an unaligned pointer to Device memory,
> or a size which overruns MTE tag bounds, we may fail to honour that
> requirement when faulting on a multi-byte access even though a smaller
> access could have succeeded.
>
> Add a mitigation to the fixup routines to fall back to a single-byte
> copy if we faulted on a larger access before anything has been written
> to the destination, to guarantee making *some* forward progress. We
> needn't be too concerned about the overall performance since this should
> only occur when callers are doing something a bit dodgy in the first
> place. Particularly broken userspace might still be able to trick
> generic_perform_write() into an infinite loop by targeting write() at
> an mmap() of some read-only device register where the fault-in load
> succeeds but any store synchronously aborts such that copy_to_user() is
> genuinely unable to make progress, but, well, don't do that...
>
> CC: stable@xxxxxxxxxxxxxxx
> Reported-by: Chen Huang <chenhuang5@xxxxxxxxxx>
> Suggested-by: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
> Reviewed-by: Catalin Marinas <catalin.marinas@xxxxxxx>
> Signed-off-by: Robin Murphy <robin.murphy@xxxxxxx>
> Link: https://lore.kernel.org/r/dc03d5c675731a1f24a62417dba5429ad744234e.1626098433.git.robin.murphy@xxxxxxx
> Signed-off-by: Will Deacon <will@xxxxxxxxxx>
> Signed-off-by: Chen Huang <chenhuang5@xxxxxxxxxx>
> ---
>  arch/arm64/lib/copy_from_user.S | 13 ++++++++++---
>  arch/arm64/lib/copy_in_user.S   | 21 ++++++++++++++-------
>  arch/arm64/lib/copy_to_user.S   | 14 +++++++++++---
>  3 files changed, 35 insertions(+), 13 deletions(-)

Both now queued up, thanks.

greg k-h
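[Editor's note: the fallback strategy described in the quoted commit message can be sketched in plain C. This is a hypothetical model, not the actual arm64 assembly fixup code the patch touches: the `byte_ok()` fault predicate and the `fault_at` variable simulate a store that aborts partway through the destination, and the function follows the usercopy convention of returning the number of bytes *not* copied. The point illustrated is that a fault on a multi-byte access falls back to single-byte copies so some forward progress is made whenever any byte is copyable.]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simulated fault: bytes of dst at index >= fault_at cannot be written.
 * This stands in for an unaligned Device-memory pointer or an MTE tag
 * bound partway through the buffer. */
static size_t fault_at;

static int byte_ok(size_t dst_index)
{
	return dst_index < fault_at;
}

/* Returns the number of bytes NOT copied (usercopy convention). */
static size_t copy_with_fallback(char *dst, const char *src, size_t n)
{
	size_t copied = 0;

	/* Fast path: 8-byte chunks; the whole chunk "faults" if any of
	 * its destination bytes would fault. */
	while (n - copied >= 8) {
		size_t i, base = copied;

		for (i = 0; i < 8; i++)
			if (!byte_ok(base + i))
				goto fixup;
		memcpy(dst + base, src + base, 8);
		copied += 8;
	}
	/* Tail: byte-at-a-time. */
	while (copied < n) {
		if (!byte_ok(copied))
			return n - copied;
		dst[copied] = src[copied];
		copied++;
	}
	return 0;

fixup:
	/* Faulted on a multi-byte access: fall back to single-byte copies
	 * so the caller sees *some* forward progress if a smaller access
	 * could have succeeded. */
	while (copied < n && byte_ok(copied)) {
		dst[copied] = src[copied];
		copied++;
	}
	return n - copied;
}
```

Without the `fixup` path, a fault on the first 8-byte chunk would report all `n` bytes as uncopied even when the first few bytes were writable, which is exactly the complete-failure case the patch forbids.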