Re: [PATCH] hwrng: core - Add WARN_ON for buggy read return values

On Mon Sep 23, 2024 at 11:07 AM EEST, Jarkko Sakkinen wrote:
> On Mon Sep 23, 2024 at 10:52 AM EEST, Jarkko Sakkinen wrote:
> > On Mon Sep 23, 2024 at 9:05 AM EEST, Herbert Xu wrote:
> > > Dear TPM maintainers:
> >
> > There's really only me (for the past 10 years). Maybe that should
> > be updated.
> >
> > >
> > > Please have a look at the tpm hwrng driver because it appears to
> > > be returning a length longer than the buffer length that we gave
> > > it.  In particular, tpm2 appears to be the culprit (though I didn't
> > > really check tpm1 at all so it could also be buggy).
> > >
> > > The following patch hopefully should confirm that this is indeed
> > > caused by TPM and not some other HWRNG driver.
> >
> > >
> > > ---8<---
> > > If a buggy driver returns a length that is longer than the size
> > > of the buffer provided to it, then this may lead to a buffer overread
> > > in the caller.
> > >
> > > Stop this by adding a check for it in the hwrng core.
> > >
> > > Reported-by: Guangwu Zhang <guazhang@xxxxxxxxxx>
> > > Signed-off-by: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
> > >
> > > diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
> > > index 57c51efa5613..018316f54621 100644
> > > --- a/drivers/char/hw_random/core.c
> > > +++ b/drivers/char/hw_random/core.c
> > > @@ -181,8 +181,15 @@ static inline int rng_get_data(struct hwrng *rng, u8 *buffer, size_t size,
> > >  	int present;
> > >  
> > >  	BUG_ON(!mutex_is_locked(&reading_mutex));
> > > -	if (rng->read)
> > > -		return rng->read(rng, (void *)buffer, size, wait);
> > > +	if (rng->read) {
> > > +		int err;
> > > +
> > > +		err = rng->read(rng, buffer, size, wait);
> > > +		if (WARN_ON_ONCE(err > 0 && err > size))
> >
> > Are you sure you want to use WARN_ON_ONCE here instead of
> > pr_warn_once()? I.e. is it worth taking down the whole
> > kernel?
>
> I looked at tpm2_get_random() and it is pretty inefficient (which is
> not the same as saying it has a bug). I'd love to reimplement it.
>
> We would be better off if it pulled random data in, say, 256 or
> 512 byte chunks and cached that internally in tpm_chip. Then the
> requests would be served from that pre-fetched pool instead of doing
> a round-trip to the TPM every single time.
>
> This would improve overall kernel performance given the reduced
> number of wait states. I wonder if anyone knows what would be a
> good fetch size that will work on most TPM2 chips?
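
Roughly the shape I have in mind, purely as an untested sketch: the
rng_pool field on struct tpm_chip and the tpm_rng_pool struct are made
up here for illustration, none of this exists in the current tree, and
locking/ops handling is left out.

/*
 * Untested sketch only: 'rng_pool' is a hypothetical new field in
 * struct tpm_chip. A real version would also need to hold the chip
 * ops/locking and invalidate the pool across suspend.
 */
#define TPM_RNG_PREFETCH_SIZE 256

struct tpm_rng_pool {
	u8 buf[TPM_RNG_PREFETCH_SIZE];
	size_t avail;	/* unread bytes left in buf */
};

static int tpm2_get_random_cached(struct tpm_chip *chip, u8 *out, size_t max)
{
	struct tpm_rng_pool *pool = &chip->rng_pool;	/* hypothetical field */
	size_t copied = 0;

	while (copied < max) {
		size_t n;

		if (!pool->avail) {
			/* One round-trip refills a full chunk, not just 'max' bytes. */
			int rc = tpm2_get_random(chip, pool->buf,
						 TPM_RNG_PREFETCH_SIZE);
			if (rc <= 0)
				return copied ? copied : rc;
			pool->avail = rc;
		}

		/* Consume from the tail so nothing has to be shifted around. */
		n = min(max - copied, pool->avail);
		memcpy(out + copied, pool->buf + pool->avail - n, n);
		pool->avail -= n;
		copied += n;
	}

	return copied;
}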

Herbert, I'm about to head to the gym, but I can help you debug the
issue you're seeing with a patch for tpm2_get_random(). We need to
fix that one first, of course (you should never build on top of
broken code). Once I'm back from the gym and grocery shopping I'll
bake a debugging patch.
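
And for reference, the less lethal variant of the core check I was
asking about above would be something like this (untested, just to
show the shape, not a proper patch):

	if (rng->read) {
		int err;

		err = rng->read(rng, buffer, size, wait);
		/* Warn without a backtrace and refuse the bogus length. */
		if (err > 0 && (size_t)err > size) {
			pr_warn_once("%s: read returned %d bytes for a %zu byte buffer\n",
				     rng->name, err, size);
			err = -EIO;
		}
		return err;
	}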

BR, Jarkko
