diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 4571ef1..b8ff6a3 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1388,7 +1388,7 @@ static inline bool bvec_gap_to_prev(struct request_queue *q,
 static inline bool bio_will_gap(struct request_queue *q, struct bio *prev,
			 struct bio *next)
 {
-	if (!bio_has_data(prev))
+	if (!bio_has_data(prev) || !queue_virt_boundary(q))
		return false;
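For context, with this change applied the helper would read roughly as
follows (body reconstructed from the mainline tree of that period; treat it
as a sketch rather than the exact hunk):

static inline bool bio_will_gap(struct request_queue *q, struct bio *prev,
				struct bio *next)
{
	/* No payload, or no virt boundary on this queue: no gap possible */
	if (!bio_has_data(prev) || !queue_virt_boundary(q))
		return false;

	/*
	 * bi_io_vec[bi_vcnt - 1] is the last bvec of the whole vector,
	 * which for a split/cloned bio is not necessarily the last bvec
	 * covered by bi_iter -- hence the bio_get_last_bvec() discussion
	 * below.
	 */
	return bvec_gap_to_prev(q, &prev->bi_io_vec[prev->bi_vcnt - 1],
				next->bi_io_vec[0].bv_offset);
}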
Can we not do that?

Given there are only 3 drivers which set the virt boundary, I think it is
reasonable to do that.

3 drivers that are really performance critical. I don't think we should add
a branch that only helps some of the drivers, especially when the drivers
that do set virt_boundary *really* care about latency. bvec_gap_to_prev is
already checking the virt_boundary, and I'd sorta like to keep the
motivation to make bio_get_last_bvec() O(1).
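For reference, bvec_gap_to_prev() already short-circuits on the virt
boundary; it looks roughly like this in the current tree (a sketch from
memory, not code quoted from the series):

static inline bool bvec_gap_to_prev(struct request_queue *q,
				    struct bio_vec *bprv, unsigned int offset)
{
	/* Queues without a virt boundary never produce a gap */
	if (!queue_virt_boundary(q))
		return false;

	/*
	 * Gap if the next segment starts at a non-zero offset, or the
	 * previous segment does not end on the virt boundary.
	 */
	return offset ||
		((bprv->bv_offset + bprv->bv_len) & queue_virt_boundary(q));
}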
Currently the approaches I've thought of still need to iterate bvec by bvec;
I'm not sure O(1) can be reached easily, but I am happy to discuss an
optimized implementation.
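To make that cost concrete, here is a purely illustrative helper (hypothetical
name, not part of this series) showing why the last bvec currently requires a
walk: for a cloned or split bio, bi_vcnt/bi_io_vec describe the whole vector
while bi_iter describes the range actually covered, so the generic way to
find the last segment is to iterate.

/* Illustrative only; needs <linux/bio.h>. O(number of segments). */
static void bio_last_bvec_slow(struct bio *bio, struct bio_vec *bv)
{
	struct bvec_iter iter;
	struct bio_vec tmp;

	/* Keep overwriting *bv until the iterator reaches the final segment */
	bio_for_each_segment(tmp, bio, iter)
		*bv = tmp;
}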
Me too. Note that I don't mind if the bio split code isn't optimized, but I
do want req_gap_back_merge/req_gap_front_merge to be...
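Those two are thin wrappers around bio_will_gap(), which is why the merge
path is where the cost shows up; roughly (again a sketch of the current tree,
not new code from this series):

static inline bool req_gap_back_merge(struct request *req, struct bio *bio)
{
	/* Would appending this bio leave a gap after the request's tail? */
	return bio_will_gap(req->q, req->biotail, bio);
}

static inline bool req_gap_front_merge(struct request *req, struct bio *bio)
{
	/* Would prepending this bio leave a gap before the request's head? */
	return bio_will_gap(req->q, bio, req->bio);
}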
Also, are the bvec_gap_to_prev usages in bio_add_pc_page and
bio_integrity_add_page safe? I didn't test this stuff with integrity
payloads...
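For reference, and hedged since I haven't re-checked the integrity path
either: bio_integrity_add_page() guards appends with the same helper, along
the lines of the fragment below, and bio_add_pc_page() does the analogous
check against its last bi_io_vec entry.

	/* Fragment from bio_integrity_add_page(): refuse to append an
	 * integrity page that would create an SG gap after the previous
	 * bip_vec entry.
	 */
	if (bip->bip_vcnt &&
	    bvec_gap_to_prev(bdev_get_queue(bio->bi_bdev),
			     &bip->bip_vec[bip->bip_vcnt - 1], offset))
		return 0;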