Re: [PATCH] crypto: x86: Do not acquire fpu context for too long

On Tue, 4 Oct 2022 at 17:17, Elliott, Robert (Servers) <elliott@xxxxxxx> wrote:
>
>
>
> > -----Original Message-----
> > From: Taehee Yoo <ap420073@xxxxxxxxx>
> > Sent: Tuesday, October 4, 2022 1:02 AM
> > To: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
> > Cc: linux-crypto@xxxxxxxxxxxxxxx; davem@xxxxxxxxxxxxx; tglx@xxxxxxxxxxxxx;
> > mingo@xxxxxxxxxx; bp@xxxxxxxxx; dave.hansen@xxxxxxxxxxxxxxx; x86@xxxxxxxxxx;
> > hpa@xxxxxxxxx; ardb@xxxxxxxxxx; ebiggers@xxxxxxxxxx
> > Subject: Re: [PATCH] crypto: x86: Do not acquire fpu context for too long
> >
> > Hi Herbert,
> > Thanks a lot for your review!
> >
> > On 10/4/22 13:52, Herbert Xu wrote:
> >  > On Tue, Oct 04, 2022 at 04:49:12AM +0000, Taehee Yoo wrote:
> >  >>
> >  >>   #define ECB_WALK_START(req, bsize, fpu_blocks) do {                     \
> >  >>           void *ctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));   \
> >  >> + unsigned int walked_bytes = 0;                                  \
> >  >>           const int __bsize = (bsize);                                    \
> >  >>           struct skcipher_walk walk;                                      \
> >  >> - int err = skcipher_walk_virt(&walk, (req), false);              \
> >  >> + int err;                                                        \
> >  >> +                                                                 \
> >  >> + err = skcipher_walk_virt(&walk, (req), false);                  \
> >  >>           while (walk.nbytes > 0) {                                       \
> >  >> -         unsigned int nbytes = walk.nbytes;                      \
> >  >> -         bool do_fpu = (fpu_blocks) != -1 &&                     \
> >  >> -                       nbytes >= (fpu_blocks) * __bsize;         \
> >  >>                   const u8 *src = walk.src.virt.addr;                     \
> >  >> -         u8 *dst = walk.dst.virt.addr;                           \
> >  >>                   u8 __maybe_unused buf[(bsize)];                         \
> >  >> -         if (do_fpu) kernel_fpu_begin()
> >  >> +         u8 *dst = walk.dst.virt.addr;                           \
> >  >> +         unsigned int nbytes;                                    \
> >  >> +         bool do_fpu;                                            \
> >  >> +                                                                 \
> >  >> +         if (walk.nbytes - walked_bytes > ECB_CBC_WALK_MAX) {    \
> >  >> +                 nbytes = ECB_CBC_WALK_MAX;                      \
> >  >> +                 walked_bytes += ECB_CBC_WALK_MAX;               \
> >  >> +         } else {                                                \
> >  >> +                 nbytes = walk.nbytes - walked_bytes;            \
> >  >> +                 walked_bytes = walk.nbytes;                     \
> >  >> +         }                                                       \
> >  >> +                                                                 \
> >  >> +         do_fpu = (fpu_blocks) != -1 &&                          \
> >  >> +                  nbytes >= (fpu_blocks) * __bsize;              \
> >  >> +         if (do_fpu)                                             \
> >  >> +                 kernel_fpu_begin()
> >  >>
> >  >>   #define CBC_WALK_START(req, bsize, fpu_blocks)                          \
> >  >>           ECB_WALK_START(req, bsize, fpu_blocks)
> >  >> @@ -65,8 +81,12 @@
> >  >>   } while (0)
> >  >>
> >  >>   #define ECB_WALK_END()                                                  \
> >  >> -         if (do_fpu) kernel_fpu_end();                           \
> >  >> +         if (do_fpu)                                             \
> >  >> +                 kernel_fpu_end();                               \
> >  >> +         if (walked_bytes < walk.nbytes)                         \
> >  >> +                 continue;                                       \
> >  >>                   err = skcipher_walk_done(&walk, nbytes);                \
> >  >> +         walked_bytes = 0;                                       \
> >  >>           }                                                               \
> >  >>           return err;                                                     \
> >  >>   } while (0)
> >  >
> >  > skcipher_walk_* is supposed to return at most a page.  Why is this
> >  > necessary?
> >  >
> >  > Cheers,
> >
> > I referred to below link.
> > https://lore.kernel.org/all/MW5PR84MB18426EBBA3303770A8BC0BDFAB759@MW5PR84MB1842.NAMPRD84.PROD.OUTLOOK.COM/
> >
> > Sorry, I didn't realize that skcipher_walk_* returns at most 4K at a
> > time, so I thought the fpu context would be held for too long.
> > I have now checked skcipher_walk_*, and you are right: it returns no
> > more than 4K per step.
> > So there is no problem, as you mentioned.
> >
> > Thank you so much!
> > Taehee Yoo
>
> I think functions using the ECB and CBC macros (and
> those helper functions) are safe - notice that the third
> argument is called fpu_blocks. So, Aria's ECB mode is
> probably safe. There are no CTR macros, so that path needs
> to be checked more carefully.
>
> We still need to check all the functions that don't use those
> macros and helper functions - for example, the SHA functions
> (I've started working on a patch for those).
>
> Running modprobe tcrypt mode=0, I encountered RCU stalls
> on these:
>         tcrypt: testing encryption speed of sync skcipher cts(cbc(aes)) using cts(cbc(aes-aesni))
>         tcrypt: testing encryption speed of sync skcipher cfb(aes) using cfb(aes-aesni)
>
> aesni-intel_glue.c registers "__cts(cbc(aes))", not "cts(cbc(aes-aesni))",
> and doesn't register any cfb algorithms, so those tests are using the
> generic templates, which must not be breaking up the loops as needed.
>

These all use the aes-aesni cipher wrapped in various layers of
generic code. The core cipher puts kernel_fpu_begin/end around every
AES block (16 bytes) it processes, so I doubt that the crypto code is
at fault here, unless the issue is in tcrypt itself.


