On Wed, Oct 12, 2011 at 7:38 PM, Alan Stern <stern@xxxxxxxxxxxxxxxxxxx> wrote:
> On Wed, 12 Oct 2011, Markus Rechberger wrote:
>
>> ok, here you must be missing something in your explanation or bulk
>> analysis. The effect is: if the device is set up with a small transfer
>> buffer -> high CPU load; if it is set up with a bigger transfer
>> buffer -> low CPU load. We are talking about a 30% CPU difference
>> at 1.3 GHz.
>
> I don't see why the size of the transfer buffer should have much impact
> on the CPU load. The CPU has to do about the same amount of work per
> data byte, regardless of how those bytes are grouped into buffers,
> right?
>

At that stage I'm the wrong one to ask; it's a setting at the IC level.
I can't tell you more than what I see in the specs and what I know from
experience. If it were a PCI device, I would say the device generates
more interrupts with smaller packets, which of course means the CPU
load goes up.

> One of the main factors affecting CPU load is how frequently interrupts
> occur. If you use small buffer sizes and get interrupted after each
> buffer is filled, the overhead will be higher. But you can avoid that
> by making the interrupts occur less often: Set the
> USBDEVFS_URB_NO_INTERRUPT flag for most of the transfers.
>

No, that doesn't work. The driver reports the following unless we
submit 120 URBs (due to the small buffers); with adjusted HW registers,
2 URBs are enough:

2011-10-12 19:56:16 [28117] TS Sync byte not aligned, realigning stream (0)
2011-10-12 19:56:16 [28117] TS Sync byte not aligned, realigning stream (616)
2011-10-12 19:56:16 [28117] TS Sync byte not aligned, realigning stream (1128)
2011-10-12 19:56:16 [28117] TS Sync byte not aligned, realigning stream (1345)
2011-10-12 19:56:16 [28117] TS Sync byte not aligned, realigning stream (2152)
2011-10-12 19:56:16 [28117] TS Sync byte not aligned, realigning stream (2385)
2011-10-12 19:56:16 [28117] TS Sync byte not aligned, realigning stream (2664)

That means the stream is corrupted. Maybe the device picks up all the
requests and fills them at once according to the chip settings.

>> It does not matter what transfer buffer size the userspace application
>> is set to.
>>
>> According to the specs:
>> Available bulk transfer sizes are:
>> - 188*n bytes, where n = 1~256
>
> How are these bulk transfers divided into packets?

Still with 512-byte boundaries; the rest is not aligned. The transfer
buffer is still 512 bytes. This is about device A (which has the more
flexible chipset).

> Is a single
> transfer of 188*n bytes divided up into 188*n/512 packets, each holding
> 512 bytes, plus a possible shorter packet at the end? Or is it divided
> into n packets, each holding 188 bytes (which would be much less
> efficient)?

It fully utilizes 512 bytes every time.

>
>> We are only using asynchronous transfers for all devices.
>
> If you can arrange a static image, so that the data bytes don't change
> between frames, it would be worthwhile comparing the data you get with
> the various transfer schemes. An easy way to do this is to use
> Wireshark to capture the transfers.
>
> Also, it might help if you post a sample of the code you are testing.
> Maybe something will stand out.
>

I give up on that now. I pointed out how to get the device to work with
usbfs, and that there's something you and all the others don't seem to
know about USB bulk transfers (myself included, when it comes to that
part).
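For the archives, in case someone picks this up later: the queued-URB
scheme Alan is describing looks roughly like the sketch below. This is a
minimal illustration against usbfs only, not our driver code; the device
node, interface number, endpoint 0x81, URB count and buffer size are all
made-up example values.

/* Minimal usbfs bulk-read sketch with USBDEVFS_URB_NO_INTERRUPT set on
 * all but the last URB, so only one completion interrupt per batch.
 * Not our driver code: device node, interface, endpoint and sizes are
 * made-up example values. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/usbdevice_fs.h>

#define NURBS   8
#define BUFSIZE 24064           /* 188*128 == 512*47, see note below */
#define EP_IN   0x81            /* hypothetical bulk IN endpoint */

int main(void)
{
        static struct usbdevfs_urb urbs[NURBS];
        int fd, ifc = 0, i;

        fd = open("/dev/bus/usb/001/002", O_RDWR);      /* example node */
        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (ioctl(fd, USBDEVFS_CLAIMINTERFACE, &ifc) < 0) {
                perror("USBDEVFS_CLAIMINTERFACE");
                return 1;
        }

        for (i = 0; i < NURBS; i++) {
                urbs[i].type = USBDEVFS_URB_TYPE_BULK;
                urbs[i].endpoint = EP_IN;
                urbs[i].buffer = malloc(BUFSIZE);
                urbs[i].buffer_length = BUFSIZE;
                /* suppress the completion interrupt on all but the last */
                if (i != NURBS - 1)
                        urbs[i].flags = USBDEVFS_URB_NO_INTERRUPT;
                if (ioctl(fd, USBDEVFS_SUBMITURB, &urbs[i]) < 0) {
                        perror("USBDEVFS_SUBMITURB");
                        return 1;
                }
        }

        /* reap completed URBs and resubmit to keep the queue full */
        for (;;) {
                struct usbdevfs_urb *done;

                if (ioctl(fd, USBDEVFS_REAPURB, &done) < 0) {
                        perror("USBDEVFS_REAPURB");
                        break;
                }
                /* done->buffer now holds done->actual_length bytes */
                if (ioctl(fd, USBDEVFS_SUBMITURB, done) < 0) {
                        perror("USBDEVFS_SUBMITURB");
                        break;
                }
        }

        close(fd);
        return 0;
}

The idea is that the host controller only raises a completion interrupt
for the last URB in the batch, while the earlier ones are still reaped
normally. As I said above, with this device it does not help: the stream
comes back corrupted unless the chip-side buffer registers match.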
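For reference, the "realigning" in those log lines is the application
hunting for the MPEG-TS sync byte again, something like this
(a hypothetical helper, not our actual code):

/* Sketch of the realignment the log above is complaining about: find
 * an offset where 0x47 sync bytes recur every 188 bytes. Hypothetical
 * helper, not the actual application code. */
#include <stddef.h>

#define TS_PKT  188
#define TS_SYNC 0x47

/* Return the offset of the first plausible TS packet, or -1. */
long ts_find_sync(const unsigned char *buf, size_t len)
{
        size_t off, k;

        for (off = 0; off + 4 * TS_PKT <= len; off++) {
                for (k = 0; k < 4; k++)
                        if (buf[off + k * TS_PKT] != TS_SYNC)
                                break;
                if (k == 4)     /* 0x47 four times at 188-byte spacing */
                        return (long)off;
        }
        return -1;
}

On a clean stream this only ever fires at offset 0; offsets like 616,
1128 and 1345 in the log suggest the 188-byte packet grid keeps drifting
inside the bulk data.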
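One more note on the 188*n vs. 512 question: the smallest transfer size
that is a whole number of both 188-byte TS packets and 512-byte bulk
packets is lcm(188, 512) = 24064 bytes, i.e. n = 128 (or n = 256 for
double that), which falls inside the spec's n = 1~256 range. A trivial
check (my own arithmetic, not from the chip spec):

/* Prints: lcm(188, 512) = 24064 (= 188*128 = 512*47) */
#include <stdio.h>

static unsigned gcd(unsigned a, unsigned b)
{
        while (b) {
                unsigned t = a % b;
                a = b;
                b = t;
        }
        return a;
}

int main(void)
{
        unsigned lcm = 188 / gcd(188, 512) * 512;

        printf("lcm(188, 512) = %u (= 188*%u = 512*%u)\n",
               lcm, lcm / 188, lcm / 512);
        return 0;
}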
Having the device driver is one thing; the other would be to add the
software demodulation part for certain protocols (which is closed
source and comes from a third party) and the application part. The
Linux driver is stealing too much time for nothing. I'm going back to
OS X and might retry it with Linux another time.

I guess the Linux and Mac stacks are somewhat different, but in the end
both give the same result with the same buffer sizes (buffers that are
too small and buffers that are too big both corrupt the MPEG stream).
Our driver works with Mac, Solaris and BSD, and could work with Linux
if those castrated bulk boundaries were fixed.

Anyway, thanks for trying.

Markus