On Fri, Nov 06, 2009 at 10:35:16AM -0500, Vlad Yasevich wrote:
> 
> 
> Neil Horman wrote:
> > On Mon, Nov 02, 2009 at 08:41:06PM -0500, Neil Horman wrote:
> >> On Mon, Nov 02, 2009 at 01:57:42PM -0500, Vlad Yasevich wrote:
> >>>
> >>> Neil Horman wrote:
> >>>> On Mon, Nov 02, 2009 at 12:58:27PM -0500, Vlad Yasevich wrote:
> >>>>> Neil Horman wrote:
> >>>>>>>> We could do that yes, but it concerns me, as assigning the tsn in
> >>>>>>>> sctp_outq_flush leaves us in a position where we assign tsns to chunks
> >>>>>>>> that might get dropped prior to submission to the ip layer. Consider if
> >>>>>>>> we have a routing table disruption, and we follow the no_route path in
> >>>>>>>> sctp_packet_transmit. In that situation, we will discard chunks with
> >>>>>>>> tsns assigned, leaving a gap in our stream. Unless we have a recovery
> >>>>>>>> path for that, I think the better option is to wait to assign tsns
> >>>>>>>> until we are sure we can submit them to the ip layer safely (where the
> >>>>>>>> transmitted queue can re-transmit them if need be). If you can explain
> >>>>>>>> the SACK case in a little more detail above, perhaps we can come up
> >>>>>>>> with some logic to govern when it is and is not safe to call
> >>>>>>>> sctp_packet_transmit from sctp_packet_transmit_chunk for data chunks.
> >>>>>>> Assume that we have a number of queued chunks that add up to multiple
> >>>>>>> MTUs, all going to the same transport (the typical case). They are
> >>>>>>> currently gated by the congestion window.
> >>>>>>>
> >>>>>>> A SACK arrives, triggering a flush. With the proposed patch, once we
> >>>>>>> fill a single MTU, the main loop in sctp_outq_flush will exit and we
> >>>>>>> will transmit only a single packet, under-utilizing our congestion
> >>>>>>> window and thus preventing future growth. With the old code, we had
> >>>>>>> multiple packets sent out, thus filling the congestion window.
> >>>>>>>
> >>>>>>> Another thing your patch didn't take into account is that every time we
> >>>>>>> change the transport in sctp_outq_flush, we reset the packet,
> >>>>>>> effectively marking it empty. You would end up leaking chunks if there
> >>>>>>> were any queuing effects.
> >>>>>>>
> >>>>>>> If a transient routing problem happens and the packet fails to get
> >>>>>>> sent, that's no different than a loss event in the network. It will get
> >>>>>>> reported back as gaps or, if the failure is long term, it will be
> >>>>>>> detected with HBs and retransmissions. So I don't see a problem with
> >>>>>>> assigning TSNs when the DATA is added to the packet. We don't really
> >>>>>>> want to do it any earlier, though.
> >>>>>> Yeah, ok, here's a new version: instead of just skipping the packet
> >>>>>> transmit in transmit_chunk, we simply assign a tsn in sctp_outq_flush,
> >>>>>> after we dequeue a data chunk from the outq and do the normal expiration
> >>>>>> and invalid stream checking.
> >>>>>>
> >>>>>> Regards
> >>>>>> Neil
> >>>>>>
> >>>>> Hi Neil
> >>>>>
> >>>>> I don't think we can do that in sctp_outq_flush().
> >>>>>
> >>>> Why can't we do it in sctp_outq_flush?
> >>> Ok, looking at the 'resent' code you left in packet_transmit, this will
> >>> work, but we now end up assigning sequence numbers to DATA that may not be
> >>> transmitted this time around.
> >>>
> >>> It will also make FWD-TSNs a bit weird. Worth a test. My personal
> >>> preference would be to do it when the chunk is added to the packet.
> >> Ok, very well. I've moved the assignment to the point right after we
> >> actually enqueue the chunk to the offered packet.
> >>
>
> > Ping, sorry Vlad, not sure where we've left off with this. I've given this
> > some testing here, and it works for me. Were there more concerns you had
> > with this variant of the patch?
> >
>
> Just running some tests here. It also looks like this was based on the
> pre-.31 code.
>

I applied it against the head of your lksctp-dev git tree; is there something
newer you'd like me to apply it on top of? I can if you like.

Neil
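
For readers following the cwnd point above: the multi-packet behavior of the old
code comes from sctp_packet_transmit_chunk() in net/sctp/output.c, which, on
SCTP_XMIT_PMTU_FULL, transmits the full packet immediately and retries the chunk
in a fresh one, so a single SACK-triggered sctp_outq_flush() can emit several
MTU-sized packets until rwnd/cwnd stop it. A rough sketch of that path follows;
it is abbreviated from the ~2.6.31-era function with error handling trimmed, not
a verbatim copy:

sctp_xmit_t sctp_packet_transmit_chunk(struct sctp_packet *packet,
				       struct sctp_chunk *chunk)
{
	sctp_xmit_t retval;
	int error;

	retval = sctp_packet_append_chunk(packet, chunk);

	switch (retval) {
	case SCTP_XMIT_PMTU_FULL:
		if (!packet->has_cookie_echo) {
			/* The packet is full: push it to the wire now and
			 * retry this chunk in the fresh, empty packet that
			 * results.  Skipping this transmit caps a flush at
			 * a single MTU and under-fills the congestion
			 * window.
			 */
			error = sctp_packet_transmit(packet);
			if (error < 0)
				chunk->skb->sk->sk_err = -error;

			retval = sctp_packet_append_chunk(packet, chunk);
		}
		break;

	case SCTP_XMIT_RWND_FULL:
	case SCTP_XMIT_OK:
	case SCTP_XMIT_NAGLE_DELAY:
		break;
	}

	return retval;
}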
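
And a minimal sketch of the placement agreed on above: TSN assignment moves into
the DATA-append path, so a chunk only consumes a sequence number once it has
definitely been accepted into a packet, and a chunk dropped on the no_route path
never leaves a gap. sctp_chunk_assign_tsn() and sctp_chunk_assign_ssn() are the
existing helpers; the body below is simplified (the elided checks are only
summarized in comments) and is not the actual patch:

static sctp_xmit_t sctp_packet_append_data(struct sctp_packet *packet,
					   struct sctp_chunk *chunk)
{
	/* The real function first checks rwnd, cwnd and Nagle, returning
	 * SCTP_XMIT_RWND_FULL or SCTP_XMIT_NAGLE_DELAY before the chunk
	 * is committed to the packet.
	 */

	/* The chunk is now definitely going out in this packet, so this
	 * is the first safe moment to hand out a TSN (and a stream
	 * sequence number).  Both helpers are no-ops on a chunk that
	 * already has one, so a retransmission keeps its original TSN.
	 */
	sctp_chunk_assign_tsn(chunk);
	sctp_chunk_assign_ssn(chunk);

	return SCTP_XMIT_OK;
}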