On Wed, Jun 12, 2024 at 05:31:17PM +0800, Herbert Xu wrote:
> On Mon, Jun 10, 2024 at 08:48:22PM -0700, Eric Biggers wrote:
> >
> > +	if (++io->num_pending == v->mb_max_msgs) {
> > +		r = verity_verify_pending_blocks(v, io, bio);
> > +		if (unlikely(r))
> > +			goto error;
> > +	}
>
> What is the overhead if you just let it accumulate as large a
> request as possible?  We should let the underlying algorithm decide
> how to divide this up in the most optimal fashion.

The queue adds 144 * num_messages bytes to each bio, so it's desirable to
keep this memory overhead down.  That's why it makes sense to limit the
queue length to the interleaving factor of the multibuffer hashing.  Yes,
we could build something where queue lengths beyond that give a marginal
performance benefit by saving indirect calls, but I don't think that
would be worth bloating the per-IO memory.

Another thing to keep in mind is that with how the dm-verity code is
currently structured, for each data block it retrieves the wanted hash
from the Merkle tree (which it prefetched earlier) before hashing the
data block.  So I also worry that if we wait too long before starting to
hash the data blocks, dm-verity will spend more time unnecessarily
blocked waiting on Merkle tree I/O.

- Eric