On Wed, Mar 24, 2021 at 5:37 PM Michael Thomas <mike@xxxxxxxx> wrote:
IPsec certainly suffered this fate, though with filtering I'm not sure if it would have the right security properties for tunnel mode. Certainly had we used transport mode IPsec instead of SSL we wouldn't be coming back 25 years later worried about the TCP checksum.
Mike
My research into Internet-PHB has reached the point where I am having to write my own presentation layer because HTTP/HTTPS isn't giving me what I need.
Alice (@alice) is sending a message to a host belonging to her Mesh Service Provider (@provider).
Alice and the provider both have public key credentials. But they are not PKIX credentials or anything remotely similar. So TLS Client Auth really isn't a help. And there are in any case two distinct layers of credentials:
1) Device/host credentials
2) Principal credentials
In the Mesh, every device and every host has its own unique credentials that are bound to those of the principal that issued the credentials.
We do a kimono-type protocol when negotiating a connection. First we establish a key exchange between the devices, and this is constructed so that the service only learns the client device and client principal credentials after we have a secure channel.
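To make the shape of that concrete, here is a minimal sketch in Python, assuming X25519, HKDF and AES-GCM from the pyca/cryptography package. The message layout and names are illustrative assumptions, not the actual Mesh exchange:

    # Deferred-identity key exchange sketch (assumptions, not Mesh wire format).
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def derive_key(shared_secret: bytes) -> bytes:
        # Derive a 128-bit channel key from the raw ECDH output.
        return HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                    info=b"channel key").derive(shared_secret)

    # Step 1: both sides exchange *ephemeral* keys only; nothing sent so far
    # identifies the client device or the client principal.
    client_eph = X25519PrivateKey.generate()
    service_eph = X25519PrivateKey.generate()
    client_key = derive_key(client_eph.exchange(service_eph.public_key()))
    service_key = derive_key(service_eph.exchange(client_eph.public_key()))
    assert client_key == service_key

    # Step 2: only now, inside the secure channel, does the client disclose
    # its device and principal credentials (placeholder bytes here).
    credentials = b"device-credential || principal-credential"
    nonce = os.urandom(12)
    sealed = AESGCM(client_key).encrypt(nonce, credentials, None)

    # The service decrypts and can then authenticate the client; a passive
    # observer on the wire learned neither credential.
    assert AESGCM(service_key).decrypt(nonce, sealed, None) == credentials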
While working on this scheme I have come to the following conclusions:
1) Do packet fragmentation at the application layer; this enables super-packets of up to 256 Ethernet frames (see the sketch after this list).
2) Use AEAD (OCB preferred over GCM) to provide a 128-bit integrity check on the super-packet.
3) Packet-level integrity checks only need to cover the IP header, because their purpose is to allow diagnostics and debugging of lower-layer issues. Application layers MUST NOT rely on integrity checks at the IP layer or below.
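Here is a minimal sketch of points 1 and 2 together, again in Python with pyca/cryptography (AES-OCB3, which carries the 128-bit tag; AES-GCM has the same API if OCB is unavailable). The framing constants are assumptions for illustration, not the Mesh wire format:

    # Super-packet: encrypt once, tag once, fragment into IP-sized pieces.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESOCB3

    MTU = 1280           # assumed safe per-fragment payload size
    MAX_FRAGMENTS = 256  # one super-packet spans at most 256 frames

    def make_super_packet(key: bytes, seq: int, payload: bytes) -> list[bytes]:
        """Encrypt a payload as one super-packet, then fragment it."""
        nonce = seq.to_bytes(12, "big")  # sequence number doubles as nonce
        # One 128-bit tag protects the whole super-packet; the individual
        # fragments carry no integrity check of their own.
        sealed = AESOCB3(key).encrypt(nonce, payload, None)
        fragments = [sealed[i:i + MTU] for i in range(0, len(sealed), MTU)]
        assert len(fragments) <= MAX_FRAGMENTS
        # Prefix each fragment with (seq, index, last-index) so the receiver
        # can reassemble and tell exactly which fragments went missing.
        return [seq.to_bytes(8, "big") + bytes([n, len(fragments) - 1]) + f
                for n, f in enumerate(fragments)]

    key = os.urandom(16)
    fragments = make_super_packet(key, seq=1, payload=os.urandom(200_000))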
Packet 'fragmentation' at the application layer is a win because we can tune for the network conditions and do retry at the fragment (i.e. IP packet) level.
Consider the case of 'streaming' video; this is not really 'streaming'. Rather, we have a sequence of related work units. For the sake of argument, consider the case where we pull in a complete frame, then decompress, then present it [yes, I know that the CODECs don't enforce this]. We receive the data, decrypt, check the tag and, if it is OK, decompress and present to the user.
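That per-frame pipeline, as a sketch reusing the AEAD choice above; decompress() and present() are hypothetical stand-ins for the codec and the renderer:

    from cryptography.exceptions import InvalidTag
    from cryptography.hazmat.primitives.ciphers.aead import AESOCB3

    def decompress(data: bytes) -> bytes:
        return data   # stand-in for the real codec

    def present(frame: bytes) -> None:
        pass          # stand-in for the renderer

    def handle_frame(key: bytes, seq: int, sealed: bytes) -> bool:
        """Decrypt one frame; return False if the integrity check fails."""
        nonce = seq.to_bytes(12, "big")
        try:
            plaintext = AESOCB3(key).decrypt(nonce, sealed, None)
        except InvalidTag:
            return False  # corrupted frame; the caller decides what to do
        present(decompress(plaintext))
        return True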
If we are in a live video conference situation and we hit an error, we probably just scrub that frame and move on to the next. If we have to scrub too many, we tell the sender to reduce the streaming rate.
If we are doing a live studio broadcast, then the output is always a few frames behind live, and we will need to be able to recover from an error by identifying which packet(s) have errors and asking for them to be resent.
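The two recovery policies side by side, as a sketch that reuses handle_frame() from above; the message shapes and the scrub threshold are assumptions:

    SCRUB_LIMIT = 10  # assumed tolerance before asking the sender to back off

    def on_frame(key, seq, fragments, mode, stats, send):
        # fragments: reassembly buffer with None where a fragment was lost.
        missing = [n for n, f in enumerate(fragments) if f is None]
        ok = not missing and handle_frame(key, seq, b"".join(fragments))
        if ok:
            return
        if mode == "conference":
            stats["scrubbed"] += 1  # drop the frame and move on
            if stats["scrubbed"] > SCRUB_LIMIT:
                send({"reduce_rate": True})  # ask the sender to slow down
                stats["scrubbed"] = 0
        else:  # studio: output runs a few frames behind live
            send({"resend": {"seq": seq, "fragments": missing}})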
Slow start in this context means 'make the first frame you send be a thumbnail at your lowest resolution, then adapt'.
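As a toy illustration of that rule (the resolution ladder and the promotion rule are assumptions):

    LADDER = [(160, 90), (640, 360), (1280, 720), (1920, 1080)]

    def next_level(level: int, frame_ok: bool) -> int:
        # Start at LADDER[0], the thumbnail; step up while frames arrive
        # intact, step back down whenever one is scrubbed.
        if not frame_ok:
            return max(level - 1, 0)
        return min(level + 1, len(LADDER) - 1)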