Hi,

Alan Stern <stern@xxxxxxxxxxxxxxxxxxx> writes:
> On Thu, 29 Oct 2015, Matthew Dharm wrote:
>
>> Uhh... I think this is a bad idea. The 120K limit was chosen based on
>> seeing a lot of devices which broke if you tried to transfer more. At
>> least, that's my memory. Otherwise, it's a really goofy number.
>>
>> I certainly wouldn't mind you increasing it in the case of a USB 3.x
>> device, but not globally.
>
> That's what I remember also. Besides, we have a sysfs interface for
> changing this value, so the number in the driver doesn't have to be
> permanent for every device.

We can't really expect users to fiddle with sysfs to get better
throughput, right? :-)

Moreover, the same way we can use sysfs to increase max_sectors, we can
use it to decrease it every time we find a broken device. I really
wonder whether those broken devices are still available on the market.

So wouldn't a better default be 2048, decreased via quirk flags whenever
people report regressions? I mean, a 20~25% improvement is quite
considerable.

> In fact, there are quite a few USB storage devices which break if you
> try to transfer more than 64 KB at a time. That's what Windows uses by
> default (although maybe they use a larger value for USB-3 devices).

I have no idea what MS uses as a default, so I'll just take your word
for it unless you have some evidence of this statement somewhere. I just
tested this with 3 other devices I have around and they behaved with
either setting.

I wonder if the original breakage could've been due to a bug in the SCSI
layer which has long been fixed? I mean, how many years ago are we
talking about here? 10?

cheers

--
balbi
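[Editor's note: a minimal sketch of the sysfs knob discussed above. The disk name `sdb` is an assumption; adjust for your system. `max_sectors` counts 512-byte sectors, so 240 corresponds to the 120 KiB default and 2048 to 1 MiB.]

```shell
#!/bin/sh
# Sketch: inspect and raise the per-command transfer limit for one
# USB disk via sysfs. Assumption: the disk enumerated as sdb.
ATTR=/sys/block/sdb/device/max_sectors

# max_sectors is in 512-byte units: 240 -> 120 KiB, 2048 -> 1 MiB.
for s in 240 2048; do
    echo "$s sectors = $((s * 512 / 1024)) KiB per transfer"
done

# Only attempt the write when the attribute exists and we may write it
# (i.e. running as root on a system where the device is present).
if [ -w "$ATTR" ]; then
    echo "current limit: $(cat "$ATTR")"
    echo 2048 > "$ATTR"   # raise the limit; revert if the device misbehaves
fi
```

The same path can be written with a smaller value (e.g. 64) to work around a device that chokes on large transfers, which is the "decrease it every time we find a broken device" case above.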