On Tue, Jun 15, 2021 at 9:18 PM David Laight <David.Laight@xxxxxxxxxx> wrote:
>
> From: Bin Meng
> > Sent: 15 June 2021 14:09
> >
> > On Tue, Jun 15, 2021 at 4:57 PM David Laight <David.Laight@xxxxxxxxxx> wrote:
> >
> > ...
> > > I'm surprised that the C loop:
> > >
> > > > +       for (; count >= bytes_long; count -= bytes_long)
> > > > +               *d.ulong++ = *s.ulong++;
> > >
> > > ends up being faster than the ASM 'read lots' - 'write lots' loop.
> >
> > I believe that's because the assembly version has some unaligned
> > access cases, which end up being trap-n-emulated in the OpenSBI
> > firmware, and that is a big overhead.
>
> Ah, that would make sense since the asm user copy code
> was broken for misaligned copies.
> I suspect memcpy() was broken the same way.
>

Yes, Gary Guo sent a patch against the broken assembly version a long
time ago, but that patch has still not been applied as of today.

https://patchwork.kernel.org/project/linux-riscv/patch/20210216225555.4976-1-gary@xxxxxxxxxxx/

I suggest Matteo re-test using Gary's version.

> I'm surprised NET_IP_ALIGN isn't set to 2 to try to
> avoid all these misaligned copies in the network stack.
> Although avoiding 8n+4 aligned data is rather harder.
>
> Misaligned copies are just best avoided - really even on x86.
> The 'real fun' is when the access crosses TLB boundaries.

Regards,
Bin
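
P.S. For anyone following along, below is a minimal, self-contained
sketch of the word-at-a-time copy pattern being discussed, and of why
alignment matters for it. This is illustrative only, not Matteo's
patch or the kernel's memcpy(); the function name and structure are
made up for the example.

#include <stddef.h>
#include <stdint.h>

/*
 * Word-at-a-time copy sketch (hypothetical, for illustration).
 * Leading bytes are copied singly until the destination is
 * word-aligned; the word loop is only entered when the source
 * ends up word-aligned too, since on RISC-V a misaligned
 * load/store may trap into firmware (e.g. OpenSBI) and be
 * emulated there, which is far slower than plain byte copies.
 */
static void *sketch_memcpy(void *dest, const void *src, size_t count)
{
	unsigned char *d = dest;
	const unsigned char *s = src;
	const size_t mask = sizeof(unsigned long) - 1;

	/* Align the destination one byte at a time. */
	while (count && ((uintptr_t)d & mask)) {
		*d++ = *s++;
		count--;
	}

	/* Bulk copy in word-sized chunks if src is now aligned too. */
	if (!((uintptr_t)s & mask)) {
		for (; count >= sizeof(unsigned long);
		     count -= sizeof(unsigned long)) {
			*(unsigned long *)d = *(const unsigned long *)s;
			d += sizeof(unsigned long);
			s += sizeof(unsigned long);
		}
	}

	/* Tail bytes, and fallback when src stays misaligned. */
	while (count--)
		*d++ = *s++;

	return dest;
}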
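
P.P.S. On the NET_IP_ALIGN point: drivers typically reserve that many
bytes of headroom when allocating an RX buffer, so that the IP header
following the 14-byte Ethernet header lands on a 4-byte boundary. A
rough sketch of the usual pattern follows; the driver context (dev,
len, the function name) is hypothetical, but netdev_alloc_skb() and
skb_reserve() are the standard kernel helpers.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/*
 * Sketch: RX buffer allocation in a hypothetical driver.
 * Reserving NET_IP_ALIGN bytes (2 on most architectures, 0 where
 * the arch overrides it) shifts the buffer so the 14-byte Ethernet
 * header ends on a 4-byte boundary, leaving the IP header that
 * follows naturally aligned for the stack's copies.
 */
static struct sk_buff *rx_alloc_sketch(struct net_device *dev,
				       unsigned int len)
{
	struct sk_buff *skb = netdev_alloc_skb(dev, len + NET_IP_ALIGN);

	if (skb)
		skb_reserve(skb, NET_IP_ALIGN);
	return skb;
}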