Hi Artem,

On 12/11/2010 11:42 PM, Artem Leonenko wrote:
> On 12/11/2010 06:14 AM, pkondeti@xxxxxxxxxxxxxx wrote:
>>>
>>> There is one nasty scenario where the CI13xxx driver fails:
>>> a) two or more rx requests are queued (g_ether does that)
>>> b) an rx request completes, the interrupt fires and ci13xxx dequeues the rq
>>> c) the request's complete() callback gets called and in turn it calls
>>>    ep_queue()
>>> c1) in ep_queue() the request gets added to the TAIL of the rx queue list
>>> d) the ep gets primed with the rq from (b)
>>> e) the interrupt fires
>>> f) a request gets popped from the queue head for hw dequeue
>>> g) the request from the queue head was never enqueued
>>> g1) isr_tr_complete_low() doesn't enqueue more requests and it doesn't
>>>     prime the EP, so rx traffic stalls
>>>
>>> Solution:
>>> a) enqueue queued requests ASAP, i.e. before calling the complete() callback.
>>> b) don't HW enqueue and prime the endpoint with the recently added request;
>>>    use the oldest request in the queue instead.
>>>
>> Given that you are not priming the endpoint when the list already has
>> requests before queuing the current request (b), you need not do (a).
>
> Do you suggest waiting until the complete() callback finishes? The earlier
> we prime the EP, the faster data will be transmitted.
>
I was saying that (b) is sufficient to fix the problem. I agree that priming
the endpoint before calling complete() gives faster data rates.

>>
>>> Fixed issues:
>>> a) ep_queue() may return an error code even though the request was
>>>    successfully added to the queue (if _hardware_enqueue() fails)
>>> b) Added requests are always processed in LIFO order, even if they are
>>>    added in the complete() callback
>>> c) Finally, two and more queued requests are processed consistently,
>>>    even if they were added in the complete() callback
>>>
>> (b) and (c) are the same.
>
> Nope. In isr_tr_complete_low() we pop the RQ from the queue head, check
> its status field and, oops, that request was never enqueued for transfer
> (sure thing, we primed the ep with the RQ added to the tail of the queue,
> and tail != head if the queue has more than 2 elements). Here I made the
> assumption that requests are queued in the complete() callback.
>
> To fix (a) you could just add a dumb workaround and check the return value
> for -EBUSY, and that would recover the whole flow for working with 10
> queued requests. Unfortunately that won't work. See the previous
> paragraph. So (a) and (c) aren't the same issues.
>
I was saying that (b) and (c) are the same.

>>
>>> And as part of the fix to make g_ether work (and especially booting with
>>> an NFS root) I had to increase NET_IP_ALIGN to the mips32 natural value (4).
>>
>> I am not sure about that fix. But you might want to do this in a separate
>> patch.
>>
>
> Relaxing the requirements on alignment will subtly increase memory usage,
> but at the same time it will allow the ci13xxx driver + g_ether to support
> more platforms. The ci13xxx driver supports a rather big family of IP
> cores, so the more we can support the better.
>
> Can you try to test g_ether with your ci13xxx core with my patches? The
> ci13xxx IP cores have many different flavors/features depending on the
> SoC/core instantiation. I wonder whether your core works with the default
> alignment or not.
>
Sure. I will test it on Monday. But I am sure that it will work without the
alignment change, because g_ether works fine on MSM with another controller
driver (not ci13xxx_udc).

Thanks,
Pavan

--
Sent by a consultant of the Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum.
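
P.S. A minimal sketch of the queuing policy being discussed, i.e. always
append new requests to the tail, only hand the oldest pending request to the
hardware, and re-prime before calling complete(). All names here (my_ep,
my_req, prime_oldest, hw_enqueue_and_prime, gadget_complete) are illustrative
placeholders, not the actual ci13xxx_udc.c symbols:

    #include <linux/list.h>

    struct my_ep {
            struct list_head queue;         /* pending requests, oldest first */
    };

    struct my_req {
            struct list_head queue;
            int hw_queued;                  /* already handed to the controller? */
    };

    /* Illustrative helpers, assumed to exist elsewhere in the driver. */
    int hw_enqueue_and_prime(struct my_ep *ep, struct my_req *req);
    void gadget_complete(struct my_ep *ep, struct my_req *req);

    /* Hand the oldest not-yet-queued request to the controller, if any. */
    static int prime_oldest(struct my_ep *ep)
    {
            struct my_req *req;

            if (list_empty(&ep->queue))
                    return 0;

            req = list_first_entry(&ep->queue, struct my_req, queue);
            if (req->hw_queued)
                    return 0;               /* head is already on the hardware */

            req->hw_queued = 1;
            return hw_enqueue_and_prime(ep, req);
    }

    /* ep_queue(): always append, then (re)prime from the head. */
    static int my_ep_queue(struct my_ep *ep, struct my_req *req)
    {
            list_add_tail(&req->queue, &ep->queue);

            /*
             * Even if priming fails because the endpoint is already primed,
             * the request stays on the list and no error is reported to the
             * gadget driver; it is picked up on the next completion.
             */
            prime_oldest(ep);
            return 0;
    }

    /* Transfer-complete path: pop the head, re-prime, then complete(). */
    static void my_tr_complete(struct my_ep *ep)
    {
            struct my_req *req = list_first_entry(&ep->queue,
                                                  struct my_req, queue);

            list_del_init(&req->queue);

            /* Re-prime before calling complete(), so requests queued from
             * the callback land behind the one just handed to the hardware. */
            prime_oldest(ep);

            gadget_complete(ep, req);
    }

With this ordering, a request queued from inside complete() always ends up
behind the request that was just handed to the hardware, so the completion
handler never pops a descriptor that was never hardware-enqueued and rx
traffic does not stall.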