Re: [RFC/RFT PATCH 09/18] crypto: streebog - fix unaligned memory accesses

On Tue, Apr 02, 2019 at 07:15:57PM +0300, Vitaly Chikunov wrote:
> > >  
> > >  static void streebog_stage2(struct streebog_state *ctx, const u8 *data)
> > >  {
> > > -	streebog_g(&ctx->h, &ctx->N, data);
> > > +	struct streebog_uint512 m;
> > > +
> > > +	memcpy(&m, data, sizeof(m));
> > > +
> > > +	streebog_g(&ctx->h, &ctx->N, &m);
> > >  
> > >  	streebog_add512(&ctx->N, &buffer512, &ctx->N);
> > > -	streebog_add512(&ctx->Sigma, (const struct streebog_uint512 *)data,
> > > -			&ctx->Sigma);
> > > +	streebog_add512(&ctx->Sigma, &m, &ctx->Sigma);
> > >  }
> > 
> > As I understand, this is the actual fix.
> 
> Probably, even better would be to use CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> to optimize out memcpy() for such architectures.
> 

Having multiple code paths is more error-prone.  And contrary to popular
belief, you can't break alignment rules without informing the compiler, even
when CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is set.  See
https://patchwork.kernel.org/cover/10631429/.
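
To make "informing the compiler" concrete: the usual trick is a wrapper
struct marked packed, which is essentially what the generic helpers in
include/linux/unaligned/packed_struct.h do.  A minimal, untested sketch
(the streebog_uint512_una and streebog_load512 names are made up for
illustration):

	/* __attribute__((packed)) drops the alignment requirement to 1
	 * byte, so the compiler emits loads that are safe at any offset. */
	struct streebog_uint512_una {
		struct streebog_uint512 m;
	} __attribute__((packed));

	static void streebog_load512(struct streebog_uint512 *dst,
				     const u8 *src)
	{
		const struct streebog_uint512_una *p =
			(const struct streebog_uint512_una *)src;

		*dst = p->m;
	}

On architectures with efficient unaligned access this compiles to the same
code as an aligned copy; elsewhere the compiler falls back to byte loads.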

If you want to code up something yourself using get_unaligned_le64() or a
packed wrapper like the sketch above, that would probably be the way to go.
But for now I just want to fix it so it doesn't cause a test failure.  I
don't have any particular interest in optimizing Streebog myself, especially
not the C implementation (if you really cared about performance, you'd add
an assembly implementation).
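
For reference, here's an untested sketch of the get_unaligned_le64() route,
assuming struct streebog_uint512 is still the __le64 qword[8] wrapper from
include/crypto/streebog.h (needs #include <asm/unaligned.h>):

	static void streebog_stage2(struct streebog_state *ctx, const u8 *data)
	{
		struct streebog_uint512 m;
		int i;

		/* get_unaligned_le64() makes no alignment assumptions; the
		 * cpu_to_le64() round-trip is a byte-level no-op that just
		 * keeps the __le64 annotations intact for sparse. */
		for (i = 0; i < 8; i++)
			m.qword[i] = cpu_to_le64(get_unaligned_le64(data + 8 * i));

		streebog_g(&ctx->h, &ctx->N, &m);

		streebog_add512(&ctx->N, &buffer512, &ctx->N);
		streebog_add512(&ctx->Sigma, &m, &ctx->Sigma);
	}

Whether that actually beats the memcpy() version is something you'd have to
benchmark; a modern compiler usually turns the fixed-size memcpy() into the
same loads anyway.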

- Eric


