On 11/08/2020 16:49, Michael S. Tsirkin wrote:
> On Tue, Aug 11, 2020 at 03:53:54PM +0200, Laurent Vivier wrote:
>> On 11/08/2020 15:14, Michael S. Tsirkin wrote:
>>> On Tue, Aug 11, 2020 at 03:00:14PM +0200, Laurent Vivier wrote:
>>>> No problem. This code is tricky and it took me several months to
>>>> really start to understand it ...
>>>
>>> Oh great, we actually have someone who understands the code!
>>> Maybe you can help me understand: virtio_read takes the buf pointer
>>> and puts it in the vq. It can then return to the caller (e.g. on a
>>> signal). The device can meanwhile write into the buffer.
>>>
>>> It looks like if another call then happens, and that other call uses
>>> a different buffer, virtio-rng will happily return the data written
>>> into the original buf pointer, confusing the caller.
>>>
>>> Is that right?
>>>
>>
>> Yes.
>>
>> The hw_random core uses two buffers:
>>
>> - rng_fillbuf, which is used with a blocking access and protected by
>>   the reading_mutex. I think this cannot be interrupted by a kill
>>   because it's used in hwrng_fillfn(), which is a kthread.
>>
>> - rng_buffer, which is used in rng_dev_read() and can be interrupted
>>   (it is also protected by the reading_mutex).
>>
>> But if rng_dev_read() is called with O_NONBLOCK or interrupted, and
>> rng_fillbuf then starts, the two can get mixed.
>>
>> There is also the first use of rng_buffer in add_early_randomness(),
>> which uses a different size than rng_dev_read() with the same buffer
>> (and this size is 16, whereas the hwrng read API says it must be at
>> least 32...).
>>
>> The problem here is that the core has been developed with
>> synchronicity in mind, whereas virtio is asynchronous by definition.
>>
>> I think we should add some internal buffers in the virtio-rng
>> backend. This would improve performance (we are at 1 MB/s; I sent a
>> patch to improve that, but it doesn't fix the problems above), and it
>> would allow the hw_random core to use memory that doesn't need to be
>> compatible with virt_to_page().
>>
>> Thanks,
>> Laurent
>
> OK so just add a bunch of 32-byte buffers and pass them to the
> hardware, and as the data gets consumed pass them to the hardware
> again?

For virtio-rng performance we should ask for the biggest block we can
(the size given in rng_dev_read() would be great). But the problem here
is not to waste entropy: we should avoid asking for entropy we don't
need. So we can't really enqueue a buffer before knowing the size. And
if there is not enough entropy to fill the buffer, but enough for the
actual request, we can end up blocked waiting for entropy we don't need.

And the change must be done at the virtio-rng level, not in the core,
because it's useless for the other backends.

Moreover, the buffer in the core will be used with another hw_random
backend if the user changes the backend while the buffer is still in use
by virtio-rng. So we really need to copy between the virtio-rng buffer
and the core buffer.

I've also proposed a change to the virtio entropy device spec to add a
command queue and a command to flush the enqueued buffers. The purpose
was to be able to remove a blocked device, but it can also be useful in
this case, to remove the buffer of an interrupted read.

Thanks,
Laurent
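
[Editor's sketch] A minimal illustration of the copy-based approach
described above, assuming the driver owns the buffer that goes into the
virtqueue and copies into the core's buffer on each read, so an
interrupted read never leaves the caller's buffer in the vq. The field
and helper names here (data_avail, data_pos, request_entropy(),
copy_from_vq_buf(), virtio_read_sketch()) are illustrative assumptions,
not the actual virtio_rng.c code:

#include <linux/completion.h>
#include <linux/gfp.h>
#include <linux/hw_random.h>
#include <linux/kernel.h>
#include <linux/scatterlist.h>
#include <linux/string.h>
#include <linux/virtio.h>

#define VQ_BUF_SIZE 64	/* driver-chosen size, independent of the caller's */

struct virtrng_info {
	struct hwrng hwrng;
	struct virtqueue *vq;
	struct completion have_data;
	/* Driver-owned buffer: the only thing ever put in the vq. */
	u8 vq_buf[VQ_BUF_SIZE];
	unsigned int data_avail;	/* bytes the device has written */
	unsigned int data_pos;		/* bytes already copied to the core */
	bool busy;			/* a request is pending in the vq */
};

/* Enqueue the driver's own buffer, never the core's. */
static void request_entropy(struct virtrng_info *vi)
{
	struct scatterlist sg;

	reinit_completion(&vi->have_data);
	vi->data_avail = 0;
	vi->data_pos = 0;
	sg_init_one(&sg, vi->vq_buf, sizeof(vi->vq_buf));
	virtqueue_add_inbuf(vi->vq, &sg, 1, vi->vq_buf, GFP_KERNEL);
	virtqueue_kick(vi->vq);
	vi->busy = true;
}

/* Virtqueue callback: the device has filled (part of) vq_buf. */
static void random_recv_done(struct virtqueue *vq)
{
	struct virtrng_info *vi = vq->vdev->priv;
	unsigned int len;

	if (!virtqueue_get_buf(vi->vq, &len))
		return;
	vi->data_avail = len;
	vi->busy = false;
	complete(&vi->have_data);
}

/* Copy whatever entropy has already arrived into the core's buffer. */
static size_t copy_from_vq_buf(struct virtrng_info *vi, u8 *dst, size_t len)
{
	size_t n = min_t(size_t, len, vi->data_avail - vi->data_pos);

	memcpy(dst, vi->vq_buf + vi->data_pos, n);
	vi->data_pos += n;
	return n;
}

static int virtio_read_sketch(struct hwrng *rng, void *buf, size_t size,
			      bool wait)
{
	struct virtrng_info *vi = (struct virtrng_info *)rng->priv;
	size_t copied = copy_from_vq_buf(vi, buf, size);

	while (copied < size) {
		if (!vi->busy)
			request_entropy(vi);
		if (!wait)
			break;
		/*
		 * A kill here is harmless: only vq_buf is in the vq, and
		 * any leftover data is served to the next caller.
		 */
		if (wait_for_completion_killable(&vi->have_data))
			break;
		copied += copy_from_vq_buf(vi, (u8 *)buf + copied,
					   size - copied);
	}
	return copied;
}

The key point the sketch tries to show is the one from the mail: the
core's rng_buffer/rng_fillbuf never reach the virtqueue, so a switch of
backend or an interrupted rng_dev_read() cannot leave a core buffer
exposed to the device.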