[Last-Call] Tsvart last call review of draft-ietf-tsvwg-multipath-dccp-17

Reviewer: Kyle Rose
Review result: Not Ready

This document has been reviewed as part of the transport area review team's
ongoing effort to review key IETF documents. These comments were written
primarily for the transport area directors, but are copied to the document's
authors and WG to allow them to address any issues raised and also to the IETF
discussion list for information.

I have additional comments following the TSVART review content, addressed to
the Security ADs and to the IESG as a whole.

When done at the time of IETF Last Call, the authors should consider this
review as part of the last-call comments they receive. Please always CC
tsv-art@xxxxxxxx if you reply to or forward this review.

The summary of this review is Not Ready. **I am recommending early attention be
paid to this document.**

# Transport comments

* Attempts to precisely describe the design and behavior of multipath protocols
would benefit from unambiguous standardized definitions for terms such as
"path" that reflect the reality of deployment on the public internet. For
instance, this document, echoing MPTCP's RFC 8684, defines "path" as "A
sequence of links between a sender and a receiver, defined in this context by a
4-tuple of source and destination address/port pairs". This definition is
internally inconsistent: in general, a sequence of links from source to
destination is not uniquely identified by a single source and destination
address pair. The network determines most of the path, and indeed the precise
sequence can and often does change without notice. The endpoints typically have
control over the nearest hop through the choice of interface, and that's it. I
know we all know this, but in published documents we should strive to use
terminology that does not require the reader to fill in the blanks from
experience.

* Why does `MP_CONFIRM` need any more context than the sequence number of the
packet containing the control message being confirmed?

* Do you *really* intend for the parsing of `MP_ADDADDR` to be entirely a
function of the field length? The implied requirement for unique parseability
limits future extensibility more than you might imagine now.

* I don't understand how this protocol retains synchronization between sender
and receiver under lossy conditions. How is a sender to distinguish between a
dropped `MP_JOIN` and a dropped `MP_CONFIRM` sent in response to that join?
Presumably the mitigation is that the sender doesn't use the new path in either
case, but: does it retry? Should it time out after a while and try to recover
the possibly lost address ID? A clear, systematic analysis of behavior in the
presence of loss would help readers understand a protocol designed to leverage
unreliability as a feature. (A rough sketch of the ambiguity appears after this
list.)

* "In theory, an infinite number of subflows can be created within an MP-DCCP
connection, as there is no element in the protocol that represents a
restriction" does not appear to be accurate: the 8-bit field limit for address
ID in `MP_JOIN` and `MP_ADDADDR` seems to be a pretty tight limit of 256
simultaneous subflows.

* The analysis of sequence number space sufficiency seems to be off by a few
orders of magnitude from the unimplemented recommendation of RFC 7323:
considering the current globally-routable max packet size of 1500 bytes, a
48-bit packet index is closer to a 59-bit octet index than to a 64-bit one.
Nonetheless, 48 bits seems sufficient: at 1500 bytes per packet this
corresponds to roughly 375 petabytes, which seems like a large enough window
for the foreseeable future. (The arithmetic is worked through after this
list.)

* If you intend `MP_PRIO` to be reliably exchanged, use a normative "MUST be
acknowledged via `MP_CONFIRM`" rather than saying it is "assumed to be
exchanged reliably".
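
To illustrate the loss-handling question above, here is a minimal sketch of the
sender side, assuming a simple timeout-and-retry policy; the timeout, retry
cap, and helper names are mine, not the draft's:

```python
import time

JOIN_TIMEOUT = 1.0   # assumed retransmission timeout; the draft does not specify one
MAX_RETRIES = 3      # assumed retry cap; also not specified

def establish_subflow(send_mp_join, confirm_received, addr_id):
    """Try to bring up a new subflow; return True once an MP_CONFIRM arrives."""
    for _ in range(MAX_RETRIES):
        send_mp_join(addr_id)                  # the MP_JOIN may be lost in transit
        deadline = time.monotonic() + JOIN_TIMEOUT
        while time.monotonic() < deadline:
            if confirm_received(addr_id):      # the MP_CONFIRM may also be lost
                return True
            time.sleep(0.01)
        # Timeout: was the MP_JOIN dropped, or was it accepted and the
        # MP_CONFIRM dropped?  The sender cannot tell the two apart, so the
        # spec needs to say whether re-sending MP_JOIN with the same address
        # ID is safe (idempotent) and when the ID can be reclaimed.
    return False
```

And, for the record, the sequence-number and address-ID arithmetic referenced
above:

```python
from math import log2

SEQ_BITS = 48          # MP-DCCP overall sequence number width
MAX_PKT = 1500         # typical globally-routable packet size, in octets

octet_index_bits = SEQ_BITS + log2(MAX_PKT)   # ~58.6: closer to 59 bits than to 64
window_bytes = (2 ** SEQ_BITS) * MAX_PKT      # exactly 375 PiB at 1500 B/packet
address_ids = 2 ** 8                          # 8-bit address ID => 256 values

print(f"{octet_index_bits:.1f}-bit octet index, "
      f"{window_bytes / 2**50:.0f} PiB window, {address_ids} address IDs")
# -> 58.6-bit octet index, 375 PiB window, 256 address IDs
```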

# Security comments

These comments are primarily directed to the Security ADs.

The security model described in this document is what I can most charitably
describe as "incoherent". While the threat model isn't clearly articulated
anywhere in the draft, I'm guessing the goal is to protect MP-DCCP control
messages from injection or man-in-the-middle manipulation... but it doesn't
appear to actually do that. For all the trouble the protocol goes through to
perform ECDHE key exchange, the resulting integrity protections seem to miss
the things that actually matter. For instance, the document states that:

> the HMAC "Message" for MP_JOIN...is a concatenation of....The nonces of the
> MP_JOIN messages for which authentication shall be performed.

So if I'm reading the spec correctly, the HMAC protects the random nonce, but
notably *not* the critical control plane signaling, such as the connection
identifier.
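
To make the concern concrete, here is a rough sketch, using illustrative field
names rather than the draft's exact encoding, of the difference between
authenticating only the nonces and binding the tag to the signaling that
matters:

```python
import hashlib
import hmac

derived_key = b"\x00" * 32   # stand-in for the ECDHE-derived key

def tag_nonces_only(nonce_a, nonce_b):
    # What the text appears to describe: the tag covers only the nonces.
    return hmac.new(derived_key, nonce_a + nonce_b, hashlib.sha256).digest()

def tag_full_option(nonce_a, nonce_b, connection_id, addr_id, port):
    # What I would expect: the tag also covers the control-plane fields.
    msg = (nonce_a + nonce_b + connection_id
           + bytes([addr_id]) + port.to_bytes(2, "big"))
    return hmac.new(derived_key, msg, hashlib.sha256).digest()

# With tag_nonces_only(), an on-path attacker can rewrite the connection
# identifier, address ID, or port without invalidating the HMAC; with
# tag_full_option(), any such modification breaks verification.
```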

Similar fundamental misapplication of cryptographic integrity measures pervades
the document. For instance, `MP_ADDADDR` and `MP_REMOVEADDR` each include a
truncated hash of an HMAC generated during the initial handshake: this is
trivially replayable by any passive observer.
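
A sketch of why that is, assuming (as the text suggests) that the truncated
value is computed once at handshake time and then carried verbatim in every
such option:

```python
import hashlib
import hmac

# Computed once during the initial handshake and never refreshed.
handshake_hmac = hmac.new(b"\x00" * 32, b"handshake transcript",
                          hashlib.sha256).digest()
addr_token = handshake_hmac[:8]      # the truncated value carried in the option

# Every MP_ADDADDR/MP_REMOVEADDR on this connection carries the same constant
# token, so a passive observer who captures one option can replay the token in
# a forged option later; nothing binds it to a particular message, address, or
# point in time.
forged_removeaddr = b"MP_REMOVEADDR" + addr_token   # placeholder framing only
```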

I would add that there are also address linkability problems inherent to a
plaintext multipath protocol passing around static identifiers. While I myself
don't think the public internet should---or really even can---provide the kind
of privacy enabled by overlay networks like Tor, I appear to be in the rough on
this for the time being, so I feel I should at least raise it to the ADs.
That said, this might not be a real-world concern, as the last section of my
review will make plain.

# What are we doing here?

These comments are directed to the IESG as a whole.

What is the purpose of this work? As a fellow Akamai employee [noted in
2021](https://www.akamai.com/blog/security/threat-advisory-dccp-for-ddos):

> While attempting to identify real world use cases, we were unable to find a
> single application that actually utilizes the protocol. This includes source
> code searches against GitHub for SOCK_DCCP declarations, a programmatic
> constant used when setting up a Socket in source code of multiple supported
> languages.

Standards development work is extremely costly in both time and attention. If
there is no specific cross-organizational real-world use case for which
standardization would be of benefit to interoperability of systems deployed to
the public internet, this work belongs in a research lab, not at a standards
development conference.

So I ask again: what is the purpose of this work? Who's intending to deploy a
system atop this protocol, what end-user functionality is it going to enable,
and what experiments are implementors performing to help guide iteration on
protocol design so we get something that works under real network conditions?
If we can't answer these questions, we should stop spending time on it here.
That's not to say the work itself has no value, only that until and unless it
gets to the point where interoperability of real-world systems is a bottleneck,
it probably doesn't belong at the IETF.


