random(4) overheads question

I'm working on a daemon that collects timer randomness, distills it
somewhat, and pushes the results into /dev/random.

My code produces the random material in 32-bit chunks. The current
version sends it to /dev/random 32 bits at a time, doing a write() and
an entropy-update ioctl() for each chunk. Obviously I could add some
buffering and write fewer and larger chunks. My questions are whether
that is worth doing and, if so, what the optimum write() size is
likely to be.
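
For concreteness, the per-chunk path looks roughly like the sketch
below. It is a simplified illustration, not my actual daemon code: it
assumes the entropy-update ioctl is RNDADDENTROPY from
<linux/random.h> (which injects the payload and credits the entropy
estimate in one call, and needs CAP_SYS_ADMIN), and push_words() is a
hypothetical name. It takes a word count so that batching several
32-bit chunks into one call is just a matter of how the caller fills
the buffer.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/random.h>

/* Submit nwords 32-bit chunks, crediting `bits` bits of entropy for
 * them.  RNDADDENTROPY requires CAP_SYS_ADMIN (typically, run as root). */
static int push_words(int rnd_fd, const uint32_t *words, int nwords, int bits)
{
	struct rand_pool_info *info;
	int ret;

	/* rand_pool_info ends in a flexible payload, so size it per call. */
	info = malloc(sizeof(*info) + nwords * sizeof(uint32_t));
	if (!info)
		return -1;

	info->entropy_count = bits;                   /* credit, in bits   */
	info->buf_size = nwords * sizeof(uint32_t);   /* payload, in bytes */
	memcpy(info->buf, words, info->buf_size);

	ret = ioctl(rnd_fd, RNDADDENTROPY, info);
	free(info);
	return ret;
}

int main(void)
{
	uint32_t chunk = 0xdeadbeef;   /* stand-in for one distilled chunk */
	int fd = open("/dev/random", O_WRONLY);

	if (fd < 0) {
		perror("open /dev/random");
		return 1;
	}

	/* Current behaviour: one 32-bit chunk per call, claiming e.g. 8
	 * bits.  Buffering would mean accumulating more words and calling
	 * less often. */
	if (push_words(fd, &chunk, 1, 8) < 0)
		perror("RNDADDENTROPY");

	close(fd);
	return 0;
}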

I am not overly concerned about overheads on my side of the interface,
unless they are quite large. My concern is whether doing many small
writes wastes kernel resources.