> I have fixed this up in the current development tree, it now reads:
>
>	limits->max_hw_sectors = UINT_MAX;
>	limits->max_sectors = UINT_MAX;
>
> The right fix is to kill the field entirely and make sure the backends
> can handle arbitrarily sized requests.  Then as a next step kill the
> whole task indirection.  If it wasn't for the rarely used pscsi backend
> this could have easily been done a long time ago.

Well, that's fine for the backend limits (although I'm not sure I
understand why we want to ignore e.g. the limits of the underlying block
device in iblock -- what happens if we submit a bio with BIO_MAX_PAGES
to a device driver that only supports, say, 16 SG entries?).

But we still have to report and enforce something sensible for the
fabric maximum transfer length (i.e. the value returned to the initiator
in VPD page B0h).  Unless we add a bunch of code to the core to do the
fabric data movement in multiple pieces, we can't allow arbitrarily
sized IOs.  Right now if someone sends a billion-sector read command,
we'll happily try to allocate 512 GB of memory to hold our response.

My conclusion is that in the short term we need something like a
fabric_max_sectors device attribute that is reported in the block limits
page, and untangle that usage from the backend max_sectors.

 - R.