Re: Is Fragmentation at IP layer even needed ?

On Tue, Feb 9, 2016 at 5:14 AM Phillip Hallam-Baker <phill@xxxxxxxxxxxxxxx> wrote:
On Tue, Feb 9, 2016 at 7:06 AM, Tony Finch <dot@xxxxxxxx> wrote:
> Phillip Hallam-Baker <phill@xxxxxxxxxxxxxxx> wrote:
>>
>> Maybe what we needed all along was a better TCP that allowed data to
>> be sent on the first packet.
>>
>> That is what people keep seeming to re-invent.
>>
>> Another of those cases where people keep telling me that there are
>> good reasons not to do that but don't ever get round to explaining
>> what they are.
>
> http://roland.grc.nasa.gov/tcp-impl/list/archive/1292.html
>
> But I thought TCP fast open https://tools.ietf.org/html/rfc7413
> fixed the design errors in T/TCP, so is it still considered a bad idea?

Perhaps it does. But as I said, it only counts as solved when it is in
the stacks I can use. And no, Linux doesn't count.


Yup.
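
For concreteness, client-side TFO looks roughly like this where it *is* available - a sketch only, assuming a Linux kernel with TFO enabled (net.ipv4.tcp_fastopen) and a cooperating server; the host, port, and request bytes are made up:

import socket

# Client-side TCP Fast Open (RFC 7413) sketch. Assumes Linux with TFO
# enabled; socket.MSG_FASTOPEN may not exist everywhere, so fall back
# to the Linux constant.
MSG_FASTOPEN = getattr(socket, "MSG_FASTOPEN", 0x20000000)

HOST, PORT = "example.net", 80  # hypothetical TFO-capable server

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    # sendto() with MSG_FASTOPEN connects *and* sends: the payload
    # rides in the SYN once a TFO cookie from an earlier connection
    # is cached.
    s.sendto(b"GET / HTTP/1.0\r\nHost: example.net\r\n\r\n",
             MSG_FASTOPEN, (HOST, PORT))
    print(s.recv(4096))
finally:
    s.close()

Which rather proves the point below: this only works on one OS, behind a sysctl, and only when the server plays along.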

<rant>
There seems to be a fair amount of discussion requiring knowledge of the host stack, or understanding of the capabilities of a specific network (e.g., all the hosts support [reassembly of "large" fragments | TCP Fast Open], all the routers in my network can look deep into extension headers, all my devices set flow labels, etc.).

This feels deeply flawed to me - applications shouldn't need to have deep knowledge of the network or end system stack behavior, and relying on specific behavior of a system / network makes the application brittle and non-portable[0]. 

Until a behavior is supported by the lowest common denominator / (almost) everything, it probably makes sense to avoid it[1]. 

As an example (which I'll use because it's well known / simple, not because it is still applicable), reassembly of >1500 byte packets: 
"An upper-layer protocol or application that depends on IPv6
fragmentation to send packets larger than the MTU of a path should
not send packets larger than 1500 octets unless it has assurance that
the destination is capable of reassembling packets of that larger
size."

If I'm writing a general-purpose application (e.g., a gaming app), how on earth do I know if the destination is capable of reassembling >1500 octets?
I could:
A: blindly assume that it does
B: limit myself to platforms that do
C: probe

A is a poor option unless I can be very sure that everything I (conceivably) want to talk to supports it, and will continue to for the foreseeable future.

B is a poor option for obvious reasons - even on something like a phone I'd like to be able to port this to one of the other ecosystems with minimal work.

C is a poor option because I need to add significant complexity - either I have a negotiation / capabilities exchange at session startup, or I probe mid-session. I can probe in parallel, or try it and wait for failure (which is also tricky - was the packet dropped *because* it was >1500 bytes, or was it random congestion?).
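
To make C concrete, here's roughly what a probe could look like - a sketch only, assuming the peer echoes whatever arrives on the game's UDP port (the handler, sizes, and timeout are all made up for illustration):

import os
import socket

# Probe whether the peer can reassemble a >1500-octet datagram.
PROBE_SIZE = 2048   # deliberately larger than 1500 octets
TIMEOUT = 1.0       # seconds; arbitrary

def peer_reassembles_large(host: str, port: int) -> bool:
    nonce = os.urandom(16)
    payload = nonce + b"\x00" * (PROBE_SIZE - len(nonce))
    with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) as s:
        s.settimeout(TIMEOUT)
        # The sending kernel fragments this; only the destination host
        # reassembles (IPv6 routers never do).
        s.sendto(payload, (host, port))
        try:
            reply, _ = s.recvfrom(PROBE_SIZE)
        except socket.timeout:
            # No reply. *Because* it was >1500 bytes, or just loss?
            return False
    return reply[:16] == nonce

And even this doesn't settle the ambiguity in the parenthetical above - a real implementation would have to retry, and send a small control datagram alongside the big one, to tell "too big" apart from plain loss.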

I'm just trying to write a game - I'm not a network weenie. With limited resources, it makes sense to just use the well-defined, well-known, universally supported services, not to rely on something that only works on most of the devices, most of the time.
Unless the behavior provides *significant* benefit, I'm likely to avoid it.

Yes, this sucks. It means that innovation slows down - apps avoid non-ubiquitous features, which means that vendors / stacks have less incentive to build / deploy them. Waving the protocol bible and saying "but the spec says..." doesn't really change the incentives of reasonable people optimizing for their own (selfish) reasons.

</rant>

W
[0]: This is changing with "apps" - e.g., mobile apps. If you write an iOS app, you know it will run on iOS.
[1]: Unless it provides compelling benefits, you have spare development cycles, or you have a pet project / protocol.
