Re: [PATCH] x86/clear_user: Make it faster

On Wed, Jun 22, 2022 at 04:07:19PM -0500, Linus Torvalds wrote:
> Might I suggest just using "count=XYZ" to make the sizes the same and
> the numbers a bit more comparable? Because when I first looked at the
> numbers I was like "oh, the first one finished in 17s, the second one
> was three times slower!"

Yah, I got confused too but then I looked at the rate...

But it looks like even this microbenchmark is, hm, well, showing that
there's more than meets the eye. Look at the rates:

for i in $(seq 1 10); do dd if=/dev/zero of=/dev/null bs=1M status=progress count=65536; done 2>&1 | grep copied
32207011840 bytes (32 GB, 30 GiB) copied, 1 s, 32.2 GB/s
68719476736 bytes (69 GB, 64 GiB) copied, 1.93069 s, 35.6 GB/s
37597741056 bytes (38 GB, 35 GiB) copied, 1 s, 37.6 GB/s
68719476736 bytes (69 GB, 64 GiB) copied, 1.78017 s, 38.6 GB/s
62020124672 bytes (62 GB, 58 GiB) copied, 2 s, 31.0 GB/s
68719476736 bytes (69 GB, 64 GiB) copied, 2.13716 s, 32.2 GB/s
60010004480 bytes (60 GB, 56 GiB) copied, 1 s, 60.0 GB/s
68719476736 bytes (69 GB, 64 GiB) copied, 1.14129 s, 60.2 GB/s
53212086272 bytes (53 GB, 50 GiB) copied, 1 s, 53.2 GB/s
68719476736 bytes (69 GB, 64 GiB) copied, 1.28398 s, 53.5 GB/s
55698259968 bytes (56 GB, 52 GiB) copied, 1 s, 55.7 GB/s
68719476736 bytes (69 GB, 64 GiB) copied, 1.22507 s, 56.1 GB/s
55306092544 bytes (55 GB, 52 GiB) copied, 1 s, 55.3 GB/s
68719476736 bytes (69 GB, 64 GiB) copied, 1.23647 s, 55.6 GB/s
54387539968 bytes (54 GB, 51 GiB) copied, 1 s, 54.4 GB/s
68719476736 bytes (69 GB, 64 GiB) copied, 1.25693 s, 54.7 GB/s
50566529024 bytes (51 GB, 47 GiB) copied, 1 s, 50.6 GB/s
68719476736 bytes (69 GB, 64 GiB) copied, 1.35096 s, 50.9 GB/s
58308165632 bytes (58 GB, 54 GiB) copied, 1 s, 58.3 GB/s
68719476736 bytes (69 GB, 64 GiB) copied, 1.17394 s, 58.5 GB/s

Now the same thing with smaller buffers:

for i in $(seq 1 10); do dd if=/dev/zero of=/dev/null bs=1M status=progress count=8192; done 2>&1 | grep copied 
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 0.28485 s, 30.2 GB/s
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 0.276112 s, 31.1 GB/s
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 0.29136 s, 29.5 GB/s
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 0.283803 s, 30.3 GB/s
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 0.306503 s, 28.0 GB/s
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 0.349169 s, 24.6 GB/s
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 0.276912 s, 31.0 GB/s
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 0.265356 s, 32.4 GB/s
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 0.28464 s, 30.2 GB/s
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 0.242998 s, 35.3 GB/s

So it is all magic: alignment, microcode "activity" and other
planets-aligning things of the uarch.

It doesn't even get close to 50 GB/s with larger block sizes - 4M in
this case:

dd if=/dev/zero of=/dev/null bs=4M status=progress count=65536
249334595584 bytes (249 GB, 232 GiB) copied, 10 s, 24.9 GB/s
65536+0 records in
65536+0 records out
274877906944 bytes (275 GB, 256 GiB) copied, 10.9976 s, 25.0 GB/s

so it is all a bit: "yes, we can go faster, but <do all those
requirements first>" :-)
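
FWIW, if someone wants to poke at the buffer size dependency without
dd in the way, a trivial user-space reader like the below - completely
untested sketch, sizes picked to roughly match the bs=1M count=8192
runs above - should show the same picture:

/*
 * Read /dev/zero into a user buffer in a loop and report the rate -
 * the same exercise as the dd runs above, just with the buffer size
 * as the only knob.
 */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define BUF_SIZE (1UL << 20)            /* 1M, like bs=1M */
#define TOTAL    (8UL << 30)            /* 8G, like count=8192 */

int main(void)
{
        static char buf[BUF_SIZE];
        struct timespec t0, t1;
        unsigned long done = 0;
        double secs;
        int fd = open("/dev/zero", O_RDONLY);

        if (fd < 0)
                return 1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        while (done < TOTAL) {
                ssize_t n = read(fd, buf, sizeof(buf));

                if (n <= 0)
                        return 1;
                done += n;
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%lu bytes copied, %.3f s, %.1f GB/s\n",
               done, secs, done / secs / 1e9);

        close(fd);
        return 0;
}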

> But yes, apparently that "rep stos" is *much* better with that /dev/zero test.
> 
> That does imply that what it does is to avoid polluting some cache
> hierarchy, since your 'dd' test case doesn't actually ever *use* the
> end result of the zeroing.
> 
> So yeah, memset and memcpy are just fundamentally hard to benchmark,
> because what matters more than the cost of the op itself is often how
> the end result interacts with the code around it.

Yap, and this discarding of the end result is silly but well...

> For example, one of the things that I hope FSRM really does well is
> when small copies (or memsets) are then used immediately afterwards -
> does the just stored data by the microcode get nicely forwarded from
> the store buffers (like it would if it was a loop of stores) or does
> it mean that the store buffer is bypassed and subsequent loads will
> then hit the L1 cache?
> 
> That is *not* an issue in this situation, since any clear_user() won't
> be immediately loaded just a few instructions later, but it's
> traditionally an issue for the "small memset/memcpy" case, where the
> memset/memcpy destination is possibly accessed immediately afterwards
> (either to make further modifications, or to just be read).

Right.

> In a perfect world, you get all the memory forwarding logic kicking
> in, which can really shortcircuit things on an OoO core and take the
> memory pipeline out of the critical path, which then helps IPC.
> 
> And that's an area that legacy microcoded 'rep stosb' has not been
> good at. Whether FSRM is quite there yet, I don't know.

Right.
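
FWIW, a quick user-space toy like the below - completely untested
sketch, the 64-byte size and the crude TSC counting are just picked
for illustration - could at least hint whether the freshly stored
bytes come back as quickly after a "rep stosb" as they do after a dumb
store loop:

/*
 * Fill a small buffer with either a plain store loop or an explicit
 * "rep stosb", then immediately load from it. The absolute numbers
 * are crude; the interesting bit is whether the rep stosb variant
 * lands in the same ballpark as the store loop (forwarded from the
 * store buffer) or noticeably higher (L1 round trip).
 */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

#define SZ   64                 /* "small memset" */
#define RUNS 10000000UL

static volatile unsigned char dst[SZ] __attribute__((aligned(64)));

static inline void rep_stosb(volatile void *d, unsigned char v, size_t n)
{
        asm volatile("rep stosb"
                     : "+D" (d), "+c" (n)
                     : "a" (v)
                     : "memory");
}

static uint64_t run(int use_rep)
{
        uint64_t t0 = __rdtsc();
        unsigned char v;
        unsigned long i;
        int j;

        for (i = 0; i < RUNS; i++) {
                if (use_rep)
                        rep_stosb(dst, i & 0xff, SZ);
                else
                        for (j = 0; j < SZ; j++)
                                dst[j] = i & 0xff;

                /* use the just-stored data right away */
                v = dst[i & (SZ - 1)];
                (void)v;
        }

        return __rdtsc() - t0;
}

int main(void)
{
        printf("store loop + load: ~%lu cycles/iteration\n",
               (unsigned long)(run(0) / RUNS));
        printf("rep stosb  + load: ~%lu cycles/iteration\n",
               (unsigned long)(run(1) / RUNS));
        return 0;
}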

> (Somebody could test: do a 'store register to memory', then to a
> 'memcpy()' of that memory to another memory area, and then do a
> register load from that new area - at least in _theory_ a very
> aggressive microarchitecture could actually do that whole forwarding,
> and make the latency from the original memory store to the final
> memory load be zero cycles.

Ha, that would be an interesting exercise.
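
Maybe something along these lines - again a completely untested
user-space sketch, with the loaded value fed back into the next
iteration's store so that the latency of the whole chain is what
actually gets measured, and with empty asm statements only as compiler
barriers so the stores and loads don't get folded away:

/*
 * Store a register to memory, memcpy() that memory elsewhere, then
 * load a register back from the new location, over and over, with
 * each load feeding the next store.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <x86intrin.h>

#define CHUNK 32                /* small copy, within one cacheline */
#define RUNS  10000000UL

int main(void)
{
        static unsigned char src[CHUNK], dst[CHUNK];
        uint64_t val = 1, t0, t1;
        unsigned long i;

        t0 = __rdtsc();
        for (i = 0; i < RUNS; i++) {
                memcpy(src, &val, sizeof(val));         /* store reg -> mem */
                asm volatile("" ::: "memory");
                memcpy(dst, src, CHUNK);                /* copy it elsewhere */
                asm volatile("" ::: "memory");
                memcpy(&val, dst, sizeof(val));         /* load reg <- new mem */
                val++;
        }
        t1 = __rdtsc();

        printf("~%lu cycles per store->memcpy->load round trip (val: %lu)\n",
               (unsigned long)((t1 - t0) / RUNS), (unsigned long)val);
        return 0;
}

If the forwarding you describe actually kicks in, that per-iteration
number should stay small even though the data takes a detour through
two buffers; if not, you pay the store-to-load round trip every time.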

Hmm, but then, how would the hardware recognize it is the same data it
has in the cache at that new virtual address?

I presume it needs some smart tracking of cachelines. But smart
tracking costs, so it needs to be something that happens a lot in all
the insn traces the hw guys look at when thinking up new "shortcuts" to
raise IPC. :)

> I know AMD was supposedly doing that for some of the simpler cases,

Yap, the simpler cases are probably easy to track and I guess that's
where the hw does the forwarding properly, while for the more complex
ones it simply does the whole round trip, at least to a lower-level
cache if not to memory.

> and it *does* actually matter for real world loads, because that
> memory indirection is often due to passing data in structures as
> function arguments. So it sounds stupid to store to memory and then
> immediately load it again, but it actually happens _all_the_time_ even
> for smart software).

Right.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette



