Hi,

On Tue, Dec 30, 2014 at 09:40:45AM -0500, Jorge Ramirez-Ortiz wrote:
> On 12/29/2014 02:58 PM, Felipe Balbi wrote:
> > Hi,
> >
> > On Mon, Dec 29, 2014 at 02:56:25PM -0500, Jorge Ramirez-Ortiz wrote:
> >> On 12/29/2014 11:37 AM, Felipe Balbi wrote:
> >>> Hi,
> >>>
> >>> On Sun, Dec 28, 2014 at 07:28:33PM -0500, Jorge Ramirez-Ortiz wrote:
> >>>> On 12/28/2014 11:39 AM, Felipe Balbi wrote:
> >>>>> On Sat, Dec 27, 2014 at 05:33:36PM -0500, Jorge Ramirez-Ortiz wrote:
> >>>>>> Hi Ricardo/all,
> >>>>>>
> >>>>>> I finally got around to capturing a trace of a SS bulk transfer using
> >>>>>> the net2280. The trace is available to anyone interested (70 MB file
> >>>>>> for the Beagle USB 5000).
> >>>>> Can you publish the trace somewhere we can download it? I use the
> >>>>> Beagle 5000 myself and could help reviewing the traces.
> >>>>>
> >>>> Thanks Felipe.
> >>>> I don't have any public ftp server at hand, so I just pushed the log to
> >>>> my Xenomai git tree.
> >>>>
> >>>> Please grab it from here (no need to clone the tree, just press the
> >>>> download link):
> >>>> http://git.xenomai.org/xenomai-jro.git/commit/?h=logs
> >>> One of the reasons could be that you're using the printer gadget :-)
> >>> Have you tried any of the other gadgets? In any case, try this little
> >>> hack:
> >>>
> >>> diff --git a/drivers/usb/gadget/legacy/printer.c b/drivers/usb/gadget/legacy/printer.c
> >>> index 9054598..8a09661 100644
> >>> --- a/drivers/usb/gadget/legacy/printer.c
> >>> +++ b/drivers/usb/gadget/legacy/printer.c
> >>> @@ -129,7 +129,7 @@ module_param(qlen, uint, S_IRUGO|S_IWUSR);
> >>>
> >>>  /* holds our biggest descriptor */
> >>>  #define USB_DESC_BUFSIZE	256
> >>> -#define USB_BUFSIZE		8192
> >>> +#define USB_BUFSIZE		65536
> >>>
> >>>  static struct usb_device_descriptor device_desc = {
> >>>  	.bLength =		sizeof device_desc,
> >> Yes, I thought about that as well: in fact, the trace I made available
> >> already had that modification in place.
> >> (It also has the queue length increased to 20 - no more changes after
> >> those two.)
> > Oh, ok. That would mean the IP can't pump more data :-s
> >
> >> For simplicity - and to have a common test vehicle - I'll capture a trace
> >> using g_mass_storage (last time I ran that check the performance numbers
> >> were exactly the same, though).
>
> I pushed a new trace, using the g_mass_storage driver this time:
> http://git.xenomai.org/xenomai-jro.git/commit/?h=logs
>
> LUP/LDN sequences seem to occur at a higher frequency than before (every 16
> transfers or sooner).
>
> The test plan, as described in the commit:
> ------------------------------------------
> device:
>     modprobe net2280
>     dd if=/dev/zero of=/tmp/disk bs=256M count=1
>     modprobe g_mass_storage file=/tmp/disk
> host:
>     dd if=/dev/sd{x} of=/dev/null
>
> The transfer speed is pretty much the same as before (around 100 MB/s).

I was going over the previous trace again, and the duration of most of the
1024-byte transfers is rather high (over 7 us). I found a few which are
ridiculously high (102.032 us on index 3973 or somewhere around there). I
wonder what is causing these extra-slow packets. Is it SW or HW?

PS: the average throughput calculated by the sniffer with the previous trace
is 112.71 MB/sec. The average packet duration at that rate is 9.085 us.
Considering most are above 7 us and some are ridiculously slow, I guess
things really do add up.

Can you try to use the kernel function profiler to see if there's anything
that takes a lot more time than the others?

cheers

--
balbi
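
An aside on the queue-depth change Jorge mentions: the hunk header in the
diff above shows qlen declared via module_param, so it can also be set at
load time without a rebuild. A minimal sketch, assuming the legacy g_printer
module and the unmodified parameter name:

    # load the legacy printer gadget with a deeper request queue; "qlen=20"
    # mirrors the value Jorge says he used (module name and value are
    # assumptions, not confirmed by the thread)
    modprobe g_printer qlen=20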
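
A quick sanity check of the arithmetic in the PS, assuming 1024-byte data
packets and decimal megabytes (1 MB = 10^6 bytes):

    t_{pkt} = \frac{1024\,\mathrm{B}}{112.71 \times 10^{6}\,\mathrm{B/s}}
            \approx 9.085\,\mu\mathrm{s}

so the quoted average per-packet duration is exactly what the sniffer's
throughput figure implies.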
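
For anyone picking up the closing suggestion, a minimal sketch using ftrace's
built-in function profiler; the debugfs mount point and the "net2280*" filter
pattern are assumptions:

    cd /sys/kernel/debug/tracing

    # optional: restrict profiling to the UDC driver so the output stays
    # readable ("net2280*" assumes its symbols share that prefix)
    echo 'net2280*' > set_ftrace_filter

    echo 1 > function_profile_enabled
    # ... rerun the dd transfer from the test plan above ...
    echo 0 > function_profile_enabled

    # per-CPU statistics: hit count, total time and average time per function
    cat trace_stat/function0

Functions with an unusually large average time here would point at a SW
cause; if nothing stands out, the slow packets are more likely down to the HW.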