On Fri, 23 Jul 2010, Shivdas Gujare wrote:

> Hi all,
>
> I am currently trying to implement a vendor-specific gadget driver
> (bInterfaceClass = 0xFF, enumerating as a bulk IN/OUT device). The basic
> task of this driver is to send around 500 KB of data over the bulk IN
> endpoint.
> Since it enumerates as a vendor-specific device, I have written a small
> libusb application on the Linux host side which can talk to my gadget
> driver.
>
> So, currently I am deciding how many bytes can be transferred over the
> bulk IN endpoint in one attempt. (I understand wMaxPacketSize for the
> bulk IN endpoint is 512 bytes.) I tried to investigate using a pen
> drive, and usbmon shows the following output:
>
> ffff880090667cc0 3741341493 S Bi:2:028:2 -115 32768 <
> ffff880090667cc0 3741343608 C Bi:2:028:2 0 32768 = d6672e3f d3fc537e
> 3d2cc447 c0315740 8d1affdc 34984099 81fbbb67 15b40c4f
>
> which indicates that it transferred 32 KB of data in one attempt.
>
> So I would like to know how and who takes care of the 512-byte
> fragmentation for this bulk IN endpoint, and do I need to loop
> (500K/512) times in the gadget driver to transfer the whole 500K of
> data over the bulk IN endpoint with wMaxPacketSize = 512?

You probably will have to loop, since each transfer requires a contiguous
data buffer.  It is quite likely that the gadget driver will not be able
to allocate a single 500-KB buffer of contiguous memory, so you will have
to use multiple smaller buffers and hence multiple smaller transfers.

But a buffer certainly can be larger than 512 bytes, so the number of
loops can be a lot smaller than (500K/512).

> More specifically, how should I decide what the value of "size" should
> be in the following implementation?
>
>	req->length = size;
>	ret = usb_ep_queue(dev->ep_in, req, GFP_ATOMIC);
>
> Thanks a lot for any help.

It should be the size of your data buffer.  It's generally not a good
idea to allocate buffers that are more than a few pages long.
Alan Stern