Not sure I understand the 'blocking' problem in this case.
We can build a bvec table from this request and send all of it
in a single send().
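
For concreteness, a minimal sketch of that single-send() idea, assuming a
flat bvec table is already in hand (send_bvec_table() is a hypothetical
name, not nvme-tcp code, and the direction argument of iov_iter_bvec()
has varied across kernel versions):

#include <linux/bvec.h>
#include <linux/socket.h>
#include <linux/net.h>
#include <linux/uio.h>

/* Push an entire bvec table to the socket in one sendmsg() call. */
static int send_bvec_table(struct socket *sock, struct bio_vec *bvecs,
                           unsigned int nr_segs, size_t len)
{
        struct msghdr msg = { .msg_flags = MSG_DONTWAIT };

        iov_iter_bvec(&msg.msg_iter, WRITE, bvecs, nr_segs, len);
        return sock_sendmsg(sock, &msg);
}

Building the table itself is the pattern lo_rw_aio() already uses; see
the paraphrase further down.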
I would like to avoid growing bvec tables and keep everything
preallocated. Plus, a bvec_iter operates on a bvec which means
we'll need a table there as well... Not liking it so far...
Can this approach avoid your blocking issue? You can see an example
in the 'rq->bio != rq->biotail' branch of lo_rw_aio(), paraphrased below.
This is exactly an example of not ignoring the bios...
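
The pattern in question, paraphrased (details differ between kernel
versions, and flatten_rq_segments() is an illustrative name, not the
loop driver's):

#include <linux/blkdev.h>
#include <linux/bio.h>
#include <linux/slab.h>

/*
 * When a request spans several bios (rq->bio != rq->biotail), copy
 * every segment into one flat bvec table so later code never has to
 * walk the bios again.
 */
static struct bio_vec *flatten_rq_segments(struct request *rq,
                                           unsigned int *nr_segs)
{
        struct req_iterator iter;
        struct bio_vec tmp, *table, *dst;
        struct bio *bio;
        unsigned int segments = 0;

        __rq_for_each_bio(bio, rq)
                segments += bio_segments(bio);

        dst = table = kmalloc_array(segments, sizeof(*table), GFP_NOIO);
        if (!table)
                return NULL;

        rq_for_each_segment(tmp, rq, iter)
                *dst++ = tmp;

        *nr_segs = segments;
        return table;
}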
If this approach is what you need, I think you are right; we may even
introduce the following helpers (sketched after the list):
rq_for_each_bvec()
rq_bvecs()
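
A sketch of how they could look, modeled on the existing
rq_for_each_segment()/__rq_for_each_bio() macros in
include/linux/blkdev.h; bio_for_each_bvec() is assumed to be the
multi-page iterator from the multi-page bvec series, and neither helper
exists upstream yet:

#include <linux/blkdev.h>

#define rq_for_each_bvec(bvl, _rq, _iter)                       \
        __rq_for_each_bio(_iter.bio, _rq)                       \
                bio_for_each_bvec(bvl, _iter.bio, _iter.iter)

/* Count the (possibly multi-page) bvecs covering a whole request. */
static inline unsigned int rq_bvecs(struct request *rq)
{
        struct req_iterator iter;
        struct bio_vec bv;
        unsigned int nr = 0;

        rq_for_each_bvec(bv, rq, iter)
                nr++;
        return nr;
}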
I'm not sure how this helps me either. Unless we can set up a bvec_iter
that spans multiple bvecs, or have an abstraction for crossing bios when
we re-initialize the bvec_iter, I don't see how I can ignore bios
completely...
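
To make the concern concrete, here is roughly the per-bio bookkeeping a
send path needs today; struct send_state and send_state_advance() are
made up for illustration:

#include <linux/bio.h>
#include <linux/kernel.h>

/* Hypothetical per-request send state. */
struct send_state {
        struct bio *bio;        /* bio currently being transmitted */
        struct bvec_iter iter;  /* position within that bio */
};

/*
 * Advance the send position by 'done' bytes. A bvec_iter is scoped
 * to a single bio, so crossing into the next bio means detecting
 * exhaustion and re-seeding the iterator by hand. Assumes the
 * caller never advances past the end of the request.
 */
static void send_state_advance(struct send_state *s, unsigned int done)
{
        while (done) {
                unsigned int step = min(done, s->iter.bi_size);

                bio_advance_iter(s->bio, &s->iter, step);
                done -= step;
                if (!s->iter.bi_size && s->bio->bi_next) {
                        s->bio = s->bio->bi_next;
                        s->iter = s->bio->bi_iter; /* re-init per bio */
                }
        }
}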
So it looks like the nvme-tcp host driver might be the second driver
that benefits directly from multi-page bvecs.
The multi-page bvec V11 series has passed my tests and addresses almost
all of the review comments on V10. I removed bio_vecs() in V11, but that
is no big deal; we can reintroduce it whenever the need arises.
The multi-page bvec and nvme-tcp series are going to conflict, so it
would be good to coordinate on this. I think the nvme-tcp host needs
some adjustments, such as setting up a bvec_iter (a rough sketch
follows). I'm under the impression that the change is rather small and
self-contained, but I'm not sure I have the full picture here.
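
For reference, a rough sketch of that adjustment (tcp_req_init_iter() is
a made-up name; it assumes the bio has not been partially consumed,
i.e. bi_iter.bi_bvec_done == 0, and the iov_iter_bvec() direction
encoding varies by kernel version):

#include <linux/bio.h>
#include <linux/uio.h>

/*
 * Seed an iov_iter directly from the current bio's bvec table so the
 * socket code consumes bvecs without the driver walking pages; with
 * multi-page bvecs each table entry may cover more than one page.
 */
static void tcp_req_init_iter(struct iov_iter *iter, struct bio *bio,
                              unsigned int dir)
{
        struct bio_vec *vec = &bio->bi_io_vec[bio->bi_iter.bi_idx];
        unsigned int nsegs = bio->bi_vcnt - bio->bi_iter.bi_idx;

        iov_iter_bvec(iter, dir, vec, nsegs, bio->bi_iter.bi_size);
}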