On Tue, Jun 14 2016 at 9:39pm -0400,
Martin K. Petersen <martin.petersen@xxxxxxxxxx> wrote:

> >>>>> "Hannes" == Hannes Reinecke <hare@xxxxxxx> writes:
>
> Hannes> Well, the primary issue is that 'blk_cloned_rq_check_limits()'
> Hannes> doesn't check for BLOCK_PC,
>
> Yes it does.  It calls blk_rq_get_max_sectors() which has an explicit
> check for this:
>
> static inline unsigned int blk_rq_get_max_sectors(struct request *rq)
> {
>         struct request_queue *q = rq->q;
>
>         if (unlikely(rq->cmd_type != REQ_TYPE_FS))
>                 return q->limits.max_hw_sectors;
> [...]
>
> Hannes> The max_segments count, OTOH, _might_ change during failover
> Hannes> (different hardware has different max_segments setting, and this
> Hannes> is being changed during sg mapping), so there is some value to
> Hannes> be had from testing it here.
>
> Oh, this happens during failover?  Are you sure it's not because DM is
> temporarily resetting the queue limits?  max_sectors is going to be a
> single page in that case.  I just discussed a backport regression in this
> department with Mike at LSF/MM.

But that was for an older kernel.  I'm not aware of any limits reset
issue now...

> Accidentally resetting the limits during table swaps has happened a
> couple of times over the years.  We trip it instantly with the database
> in failover testing.

But feel free to throw your DB failover tests (w/ dm-mpath) at a recent
kernel ;)
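For reference, this is roughly what the check under discussion looks
like in block/blk-core.c from that timeframe (a paraphrased sketch, not
a verbatim copy of any particular tree): the first test goes through
blk_rq_get_max_sectors() quoted above (so BLOCK_PC requests are checked
against max_hw_sectors), and the second is the max_segments check Hannes
is referring to.

static int blk_cloned_rq_check_limits(struct request_queue *q,
				      struct request *rq)
{
	/* For non REQ_TYPE_FS (BLOCK_PC) requests this compares against
	 * q->limits.max_hw_sectors, per blk_rq_get_max_sectors() above. */
	if (blk_rq_sectors(rq) > blk_rq_get_max_sectors(rq)) {
		printk(KERN_ERR "%s: over max size limit.\n", __func__);
		return -EIO;
	}

	/*
	 * Segment counting settings may differ between stacking queues,
	 * so recalculate before checking against this queue's limit.
	 */
	blk_recalc_rq_segments(rq);
	if (rq->nr_phys_segments > queue_max_segments(q)) {
		printk(KERN_ERR "%s: over max segments limit.\n", __func__);
		return -EIO;
	}

	return 0;
}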