Hi,

Alan Stern <stern@xxxxxxxxxxxxxxxxxxx> writes:
>> Alan Stern <stern@xxxxxxxxxxxxxxxxxxx> writes:
>> > On Thu, 29 Oct 2015, Matthew Dharm wrote:
>> >
>> >> Uhh... I think this is a bad idea. The 120K limit was chosen based on
>> >> seeing a lot of devices which broke if you tried to transfer more. At
>> >> least, that's my memory. Otherwise, it's a really goofy number.
>> >>
>> >> I certainly wouldn't mind you increasing it in the case of a USB 3.x
>> >> device, but not globally.
>> >
>> > That's what I remember also. Besides, we have a sysfs interface for
>> > changing this value, so the number in the driver doesn't have to be
>> > permanent for every device.
>>
>> We can't really expect users to fiddle with sysfs to get better
>> throughput, right? :-)
>
> That's what udev scripts are for.
>
>> Moreover, the same way that we can use sysfs to increase max_sectors, we
>> can use it to decrease it every time we find a broken device. I really
>> wonder if those devices are still available in the market. So, wouldn't
>> a better default be 2048, which we decrease via quirk flags when people
>> complain about regressions? I mean, a 20~25% improvement is quite
>> considerable.
>
> I don't know about current availability. The decision to use 120 KB as
> the default was made a long time ago, but it wouldn't be surprising if

apparently back in 2003 [1]

> some of the devices available back then are still in use. You wouldn't
> want to cause a regression, would you? :-)

My point is that it might be better to find out which device(s) is
quirky, because that list might be a lot smaller than the list of
non-quirky devices.

Looking at the CBW structure, I have 16 bits for telling the gadget how
many sectors I want; when we limit to 240 sectors, we're only using 8
bits of those :-p

If I can find one of these old devices, I'd certainly try to, at least,
document the reasons why we chose this, apparently arbitrary, 240
sectors limit.
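For the archives, the sysfs knob and udev approach being discussed looks roughly like the sketch below. The device name, vendor/product IDs, and the 2048 value are placeholders for illustration, not a recommendation; the writable max_sectors attribute under the SCSI device directory is the interface in question (syntax per udev(7)):

```shell
# Read the current per-command limit, in 512-byte sectors.
# "sda" is an example device name; adjust for your system.
cat /sys/block/sda/device/max_sectors

# Raise it at runtime (root required): 2048 sectors = 1 MB per command.
echo 2048 > /sys/block/sda/device/max_sectors

# To make it persist across plug-in events, a udev rule can write the
# same attribute when the device appears, e.g. in
# /etc/udev/rules.d/99-usb-storage.rules (placeholder IDs):
#
#   ACTION=="add", SUBSYSTEMS=="usb", ATTRS{idVendor}=="abcd", \
#     ATTRS{idProduct}=="1234", \
#     RUN+="/bin/sh -c 'echo 2048 > /sys$DEVPATH/device/max_sectors'"
```

A per-device rule like this is also how a distro could ship a quirk list going the other way, pinning known-broken devices back down to 240 while everything else runs with a larger default.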
>> > In fact, there are quite a few USB storage devices which break if you
>> > try to transfer more than 64 KB at a time. That's what Windows uses by
>> > default (although maybe they use a larger value for USB-3 devices).
>>
>> I have no idea what MS uses as default so I'll just take your word for
>> it, unless you have some evidence of this statement somewhere.
>
> It's easy to prove for yourself. Enable VERBOSE_DEBUG in
> storage_common.h and plug the resulting mass-storage gadget into a
> Windows machine. Then check the kernel log to see how large the read

/me goes look for a windows machine

> and write transfers get. That's how I learned about the Windows
> limit.

and that happened how long ago? :-)

>> I just tested this with 3 other devices I have around and they behaved
>> with either setting. I wonder if the original breakage could've been due
>> to a bug in the SCSI layer which has long been fixed? I mean, how many
>> years ago are we talking about here? 10?
>
> Quite a long time. It wasn't a problem in the SCSI layer, though; it
> was clearly firmware bugs in the devices. If the problem had been in
> the SCSI layer then it would have affected every device, not just
> some.

Fair enough.

Linus, do you have any recollection of which device you found to be
quirky? Your original commit limiting to 240 sectors doesn't make that
clear at all ;-)

[1] http://marc.info/?l=git-commits-head&m=106945975115131&w=2

-- 
balbi
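The numbers thrown around in the thread are easy to sanity-check. A minimal sketch, assuming the usual 512-byte logical block size and a 16-bit transfer length (the field in READ(10)/WRITE(10) CDBs, which is what the CBW carries for these commands):

```python
SECTOR_SIZE = 512  # bytes; the common logical block size for these devices

# The usb-storage default under discussion: 240 sectors per command.
default_limit_bytes = 240 * SECTOR_SIZE            # 122880 bytes = 120 KB
assert 240 <= 0xFF                                 # fits in 8 bits, as noted above

# The Windows per-transfer limit mentioned in the thread: 64 KB.
windows_limit_sectors = (64 * 1024) // SECTOR_SIZE # 128 sectors

# What a 16-bit sector count could actually address.
max_16bit_sectors = 0xFFFF                         # 65535 sectors
max_16bit_bytes = max_16bit_sectors * SECTOR_SIZE  # just under 32 MB

print(default_limit_bytes // 1024)      # 120
print(windows_limit_sectors)            # 128
print(max_16bit_bytes // (1024 * 1024)) # 31
```

So the 120 KB default is well below both the Windows limit's neighborhood and the protocol's headroom, which is the gap the 20~25% throughput figure above is pointing at.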