On Thu, Mar 23, 2023 at 10:16:12PM +0000, David Laight wrote:
> From: Mark Rutland
> > Sent: 22 March 2023 14:05
> ....
> > > IIUC, in such tests you only vary the destination offset. Our copy
> > > routines in general try to align the source and leave the destination
> > > unaligned for performance. It would be interesting to add some variation
> > > on the source offset as well to spot potential issues with that part of
> > > the memcpy routines.
> >
> > I have that on my TODO list; I had intended to drop that into the
> > usercopy_params. The only problem is that the cross product of size,
> > src_offset, and dst_offset gets quite large.
>
> I thought that it was better to align the writes and do misaligned reads.

We inherited the memcpy/memset routines from the optimised cortex strings
library (fine-tuned by the toolchain people for various Arm
microarchitectures). For some CPUs with less aggressive prefetching it's
probably marginally faster to align the reads instead of the writes (as
multiple unaligned writes are usually combined in the write buffer
somewhere). Also, IIRC for some small copies (less than 16 bytes), our
routines don't bother with any alignment at all.

> Although maybe copy_to/from_user() would be best aligning the user address
> (to avoid page faults part way through a misaligned access).

In theory only copy_to_user() needs the write aligned if we want strict
guarantees of what was written. For copy_from_user() we can work around it
by falling back to a byte read.

> OTOH, on x86, is it even worth bothering at all.
> I have measured a performance drop for misaligned reads, but it
> was less than 1 clock per cache line in a test that was doing
> 2 misaligned reads in at least some of the clock cycles.
> I think the memory read path can do two AVX reads each clock.
> So doing two misaligned 64bit reads isn't stressing it.
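For illustration, the "align one side only" strategy discussed above could be
sketched in C roughly as below. This is purely a hypothetical sketch, nothing
like the actual optimised assembly: it aligns the source (as our routines do)
and lets the stores be misaligned, using memcpy() for the unaligned store so
the access stays well-defined C.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Illustrative only: align the *source* to 8 bytes with a byte-copy
 * head, then move 8 bytes at a time with aligned loads and possibly
 * misaligned stores, and finish with a byte-copy tail.
 */
static void copy_align_src(unsigned char *dst, const unsigned char *src,
			   size_t n)
{
	/* Head: byte copy until the source is 8-byte aligned. */
	while (n && ((uintptr_t)src & 7)) {
		*dst++ = *src++;
		n--;
	}

	/* Bulk: aligned 8-byte loads, stores at any alignment. */
	while (n >= 8) {
		uint64_t v = *(const uint64_t *)src;	/* aligned load */
		memcpy(dst, &v, 8);			/* unaligned store OK */
		src += 8;
		dst += 8;
		n -= 8;
	}

	/* Tail: remaining bytes. */
	while (n--)
		*dst++ = *src++;
}
```

Swapping the head loop to test `(uintptr_t)dst & 7` instead gives the
write-aligned variant David describes; the bulk loop then does the aligned
store and the possibly-misaligned load.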
I think that's what Mark found as well in his testing, though I'm sure one
can build a very specific benchmark that shows a small degradation.

--
Catalin