RE: [PATCH v4 10/24] crypto: x86/poly - limit FPU preemption

> -----Original Message-----
> From: Jason A. Donenfeld <Jason@xxxxxxxxx>
> Sent: Wednesday, November 16, 2022 5:14 AM
> To: Elliott, Robert (Servers) <elliott@xxxxxxx>
> Cc: herbert@xxxxxxxxxxxxxxxxxxx; davem@xxxxxxxxxxxxx;
> tim.c.chen@xxxxxxxxxxxxxxx; ap420073@xxxxxxxxx; ardb@xxxxxxxxxx;
> David.Laight@xxxxxxxxxx; ebiggers@xxxxxxxxxx;
> linux-crypto@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH v4 10/24] crypto: x86/poly - limit FPU preemption
> 
> On Tue, Nov 15, 2022 at 10:13:28PM -0600, Robert Elliott wrote:
> > +/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
> > +static const unsigned int bytes_per_fpu = 337 * 1024;
> > +
> 
> Use an enum for constants like this:
> 
>     enum { BYTES_PER_FPU = ... };
> 
> You can even make it function-local, so it's near the code that uses it,
> which will better justify its existence.

Using crc32c-intel as an example, the gcc 12.2.1 assembly output is the
same for:

1. const variable per-module
	static const unsigned int bytes_per_fpu = 868 * 1024;
2. enum per-module
	enum { bytes_per_fpu = 868 * 1024 };
3. enum per function (_update and _finup)
	enum { bytes_per_fpu = 868 * 1024 };

Each function gets a movl instruction with that constant, and has the
same compare, add, subtract, and jump instructions.

# ../arch/x86/crypto/crc32c-intel_glue.c:198:                   unsigned int chunk = min(len, bytes_per_fpu);
        movl    $888832, %eax   #, tmp127

# ../arch/x86/crypto/crc32c-intel_glue.c:171:                   unsigned int chunk = min(len, bytes_per_fpu);
        movl    $888832, %r13d  #, tmp128

Since enum doesn't guarantee any particular type, those variants upset
the type check inside the min() macro. min_t() is necessary to
eliminate the compiler warning:

../arch/x86/crypto/crc32c-intel_glue.c: In function ‘crc32c_pcl_intel_update’:
../arch/x86/crypto/crc32c-intel_glue.c:171:97: warning: comparison of distinct pointer types lacks a cast
  171 |                         unsigned int chunk = min(len, bytes_per_fpu);
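
For reference, pinning the type with min_t() makes the warning go away
(a sketch based on the _update loop in the glue code; the crc_pcl()
call and surrounding details are abbreviated):

	while (len) {
		/* enum constants have no guaranteed type; min_t() pins it */
		unsigned int chunk = min_t(unsigned int, len, bytes_per_fpu);

		kernel_fpu_begin();
		*crcp = crc_pcl(data, chunk, *crcp);
		kernel_fpu_end();

		data += chunk;
		len -= chunk;
	}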

> Also, where did you get this number? Seems kind of weird.

As described in replies on the v2 patches, I created a tcrypt test that
runs each algorithm on a 1 MiB buffer with no loop limits (and irqs
disabled), picks the best result out of 10 passes, and calculates the
number of bytes that nominally fit in 30 us (on a 2.2 GHz Cascade
Lake CPU).

Actual results with those values vary from 37 to 102 us; that is much
better than running unlimited, but still imperfect.

https://lore.kernel.org/lkml/MW5PR84MB184284FBED63E2D043C93A6FAB369@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/

The hash algorithms seem to congregate around three speeds:
- slow: 10 to 20 KiB for sha* and sm3
- medium: 200 to 400 KiB for poly*
- fast: 600 to 800 KiB for crc*

so it might be preferable to just go with three values (e.g., 16, 256,
and 512 KiB); a sketch follows below. There's a lot of variability in
CPU architecture, CPU speed, and other system activity that makes this
impossible to perfect.

It'd be ideal if the loops checked the CPU cycle count rather than
working on a byte count, but the RDTSC instruction is notoriously
slow, so reading it on each loop iteration would drag down overall
performance.

The RAID6 library supports running a benchmark during each boot to pick
the best implementation to use (not always enabled):
[    3.341372] raid6: skipped pq benchmark and selected avx512x4

The crypto subsystem does something similar with its self-tests, which
could be expanded to include speed tests that tune the loop values.
However, that slows down boot, and an NMI or SMI landing during the
test could mislead the measurement, leading to even worse values.

The selected values could be kept in each file or put in a shared .h file.
Both approaches seem difficult to maintain.

The powerpc modules have paragraph-sized comments explaining their MAX_BYTES
macros (e.g., in arch/powerpc/crypto/sha256-spe-glue.c) that might be a good
model for documenting the values:
	 * MAX_BYTES defines the number of bytes that are allowed to be processed
	 * between preempt_disable() and preempt_enable(). SHA256 takes ~2,000
	 * operations per 64 bytes. e500 cores can issue two arithmetic instructions
	 * per clock cycle using one 32/64 bit unit (SU1) and one 32 bit unit (SU2).
	 * Thus 1KB of input data will need an estimated maximum of 18,000 cycles.
	 * Headroom for cache misses included. Even with the low end model clocked
	 * at 667 MHz this equals to a critical time window of less than 27us.
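
Applied to the x86 values, that style might read (hypothetical wording
for the poly case):

	/*
	 * bytes_per_fpu limits the bytes processed between
	 * kernel_fpu_begin() and kernel_fpu_end(). 337 KiB was measured
	 * via tcrypt to take roughly 30 us on a 2.2 GHz Cascade Lake CPU;
	 * actual times vary with CPU architecture, clock speed, and other
	 * system activity.
	 */
	static const unsigned int bytes_per_fpu = 337 * 1024;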



> >  asmlinkage void nh_avx2(const u32 *key, const u8 *message, size_t message_len,
> >  			u8 hash[NH_HASH_BYTES]);
> >
> > @@ -26,18 +29,20 @@ static void _nh_avx2(const u32 *key, const u8 *message, size_t message_len,
> >  static int nhpoly1305_avx2_update(struct shash_desc *desc,
> >  				  const u8 *src, unsigned int srclen)
> >  {
> > +	BUILD_BUG_ON(bytes_per_fpu == 0);
> 
> Make the constant function local and remove this check.

That just makes sure someone editing the source code doesn't pick a
value that will cause the loops to hang; it's stronger than a comment
saying "don't set this to 0". It's only a compile-time check, and
doesn't show up in the assembly language output that I can see.
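
Both suggestions could be combined, keeping the check next to a
function-local constant (a fragment showing only the prologue; the
rest of the function is unchanged):

	static int nhpoly1305_avx2_update(struct shash_desc *desc,
					  const u8 *src, unsigned int srclen)
	{
		/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
		enum { bytes_per_fpu = 337 * 1024 };

		/* compile-time guard: a 0 limit would hang the loop below */
		BUILD_BUG_ON(bytes_per_fpu == 0);
		...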
 
> > +
> >  	if (srclen < 64 || !crypto_simd_usable())
> >  		return crypto_nhpoly1305_update(desc, src, srclen);
> >
> > -	do {
> > -		unsigned int n = min_t(unsigned int, srclen, SZ_4K);
> > +	while (srclen) {
> 
> Does this add a needless additional check or does it generate better
> code? Would be nice to have some explanation of the rationale.

Each module's assembly function can have different handling of:
- length 0
- length < block size
- length < some minimum length
- length < a performance switchover point
- length not a multiple of block size
- current buffer pointer not aligned to block size
Sometimes the C glue logic checks values upfront; sometimes
it doesn't.

The while loops help get them to follow one of two patterns:
	while (length)
or
	while (length >= BLOCK_SIZE)

and sidestep some of the special handling concerns.
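
Concretely, each chunked update ends up shaped roughly like this (a
sketch for the nhpoly1305 case; error handling and block-size details
are omitted):

	while (srclen) {
		unsigned int chunk = min_t(unsigned int, srclen, bytes_per_fpu);

		kernel_fpu_begin();
		/* process one bounded chunk with the SIMD helper */
		crypto_nhpoly1305_update_helper(desc, src, chunk, _nh_avx2);
		kernel_fpu_end();

		src += chunk;
		srclen -= chunk;
	}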

Performance-wise, the patches are either:
- adding lots of kernel_fpu_begin() and kernel_fpu_end() calls
  (for all the ones that were running unlimited), or
- removing lots of kernel_fpu_begin() and kernel_fpu_end() calls
  (e.g., polyval relaxed from 4 KiB to 383 KiB),

which is much more impactful than the cost of the common while loop
entry check.

I created tcrypt tests that try lengths around all the special
values like 0, 16, 4096, and the selected bytes per FPU size
(comparing results to the generic algorithms like the extended
self-tests), so I think these loops are functionally correct
(I've found that violating the undocumented assumptions of the
assembly functions is a good way to exercise RCU stall
reporting).
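
The boundary lengths exercised look roughly like this (illustrative
only, not the actual tcrypt table):

	/* lengths around block size, page size, and the FPU limit */
	static const unsigned int test_lens[] = {
		0, 1, 15, 16, 17,
		4095, 4096, 4097,
		bytes_per_fpu - 1, bytes_per_fpu, bytes_per_fpu + 1,
	};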
