Re: [Last-Call] EAT profiles (was Re: Iotdir last call review of draft-ietf-rats-eat-13)

Focusing here on CBOR encoding:

On Jun 7, 2022, at 5:32 PM, Carsten Bormann <cabo@xxxxxxx> wrote:

On 8. Jun 2022, at 01:26, Laurence Lundblade <lgl@xxxxxxxxxxxxxxxxx> wrote:

CBOR — RFC 8949 clearly allows for both indefinite and definite encoding.

Indeed (after s/encoding/length encoding/).

If one implementation chooses one and another the other, there will not be interoperability.

FTFY:
If one implementation inexplicably chooses only to accept one of them, there will not be interoperability with generators that generate the other.

Don’t do that, unless there is a very specific reason not to.
(One specific reason may be where you need deterministic encoding — there a decision has been made to only allow definite length encoding.)

Generic implementations accept both length encodings.
I can’t imagine an implementation that can handle the complexity of EAT but not the complexity of both length encodings.

I can. Decoding indefinite-length strings requires a memory allocator, or some other strategy for coalescing string chunks. The COSE payload, which must be hashed, could arrive as many independent string chunks, and COSE header parameter values could be lots of string chunks too. However you do the coalescing or processing, it means more code, and it is something people may not always implement. My t_cose implementation of COSE doesn’t support indefinite-length strings yet.
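
To make the cost concrete, here is a rough sketch in plain C (not taken from t_cose or from the draft) of the same five-byte string in the two length encodings of RFC 8949, together with the minimal chunk coalescing a decoder needs for the indefinite form. It handles only chunk lengths up to 23 and exists purely to show the extra code and allocation involved:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* 0x45 = major type 2 (byte string), definite length 5, then the content */
    static const uint8_t definite[] = { 0x45, 0x01, 0x02, 0x03, 0x04, 0x05 };

    /* 0x5F = byte string, indefinite length; two definite-length chunks
     * (0x42 .., 0x43 ..); 0xFF is the "break" that ends the string */
    static const uint8_t indefinite[] = {
        0x5F, 0x42, 0x01, 0x02, 0x43, 0x03, 0x04, 0x05, 0xFF
    };

    /* Coalesce the chunks of an indefinite-length byte string into one
     * allocated buffer -- the extra work a definite-length-only decoder
     * never has to do. */
    static uint8_t *coalesce(const uint8_t *in, size_t in_len, size_t *out_len)
    {
        uint8_t *out = NULL;
        size_t total = 0;
        size_t i = 1;                         /* skip the 0x5F head byte */

        while (i < in_len && in[i] != 0xFF) {
            size_t chunk_len = in[i] & 0x1F;
            /* chunks must be definite-length byte strings; only short
             * lengths (0..23) are handled in this sketch */
            if ((in[i] >> 5) != 2 || chunk_len > 23 ||
                i + 1 + chunk_len > in_len) {
                free(out);
                return NULL;
            }
            uint8_t *tmp = realloc(out, total + chunk_len);
            if (tmp == NULL) {
                free(out);
                return NULL;
            }
            out = tmp;
            memcpy(out + total, &in[i + 1], chunk_len);
            total += chunk_len;
            i += 1 + chunk_len;
        }
        *out_len = total;
        return out;
    }

    int main(void)
    {
        size_t len = 0;
        uint8_t *joined = coalesce(indefinite, sizeof(indefinite), &len);

        /* Both encodings carry the same five content bytes. */
        if (joined != NULL && len == 5 && memcmp(joined, &definite[1], 5) == 0)
            printf("chunks coalesce to the same bytes as the definite form\n");
        free(joined);
        return 0;
    }

Coalescing could also be done by feeding the chunks into an incremental hash instead of allocating, but either way it is code that a definite-length-only decoder never needs.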


This subject is normally not visible to CBOR users because CBOR implementations simply implement both.

It’s true that many CBOR implementations can decode both definite and indefinite length encodings. That may or may not translate into all receivers of EAT, CWT and COSE being able to decode both, because the decoder APIs for indefinite lengths are sometimes different from those for definite lengths, and because of the need for things like string chunk coalescing.

However, this is about hard specification in the EAT document, not about what’s commonly implemented. If we want the EAT standard to give 100% guaranteed interoperability, the EAT document would have to be 100% specific about requirements for use of definite and indefinite lengths.

Do we want to do that in EAT?


On Jun 7, 2022, at 10:14 PM, Eliot Lear <lear@xxxxxxx> wrote:

Hi Laurence,

Every point you made below is something that could have been addressed, if necessary, using the MUSTs and SHOULDs Toerless discussed.  As I wrote earlier, the most absurd case of this is whether a nonce is an array or a single object.  You have several easy choices to avoid that: either say that it can be both, or simply require an array.  Thomas and I have gone back and forth about the length of the nonce.  Was there any discussion or objection to simply limiting the size of the nonce, as described in the PSA profile?
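
As a sketch of the kind of rule such a profile can pin down: the check below assumes a profile that requires the nonce to be a single byte string of 32, 48 or 64 bytes. Those sizes are my recollection of the PSA profile and should be treated as an assumption, not something stated in this thread; the point is only that one short rule removes both the array-vs-single-item and the length ambiguity.

    #include <stdbool.h>
    #include <stddef.h>

    /* Accept a nonce only if it is a single byte string of one of the
     * sizes the (hypothetical) profile allows. */
    static bool nonce_ok_per_profile(const unsigned char *nonce, size_t len)
    {
        (void)nonce;              /* the content is opaque to this check */
        return len == 32 || len == 48 || len == 64;
    }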


We certainly could say, “all decoders of EAT MUST support definite and indefinite length encoding” and get 100% interoperability on this issue. Or we could say “all EAT encoders MUST produce only definite length encoding”.

None of the CBOR-based protocols I know of (CWT, COSE, CoSWID, SenML) specify this. It is not the practice today. The specifications of these protocols thus do not 100% guarantee interoperability. As Toerless says, "As you say, this is not specific to this draft, but IMHO applies to all similar cases of profiles, and yes, the IETF has not been particularly good about it".

Should EAT be the first?

I don’t think so, for the same reasons these two encoding options were created in the first place.

If we require decoders to support both, then we handicap decoders on small devices. If we require encoders to support only definite length, we handicap encoders on small devices. Attestation is definitely something we want to support on constrained devices.
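
Sketching the encoder-side half of that trade-off (the emit() sink and the chunk boundaries below are illustrative, not from any particular CBOR library): with indefinite lengths each chunk can be written out the moment it is produced, while a definite-length encoder has to know, or buffer enough to learn, the total size before it can write the head byte.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative output sink; a constrained encoder might write straight
     * to a transport instead of holding the whole token in RAM. */
    static void emit(const uint8_t *bytes, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            printf("%02x ", bytes[i]);
    }

    /* Indefinite length: a chunk goes out the moment it is available. */
    static void stream_chunk(const uint8_t *chunk, size_t len)
    {
        uint8_t head = 0x40 | (uint8_t)len;   /* byte string, len <= 23 */
        emit(&head, 1);
        emit(chunk, len);
    }

    int main(void)
    {
        static const uint8_t part1[] = { 0x01, 0x02 };
        static const uint8_t part2[] = { 0x03, 0x04, 0x05 };
        uint8_t head, brk = 0xFF;

        /* Indefinite-length byte string: head, chunks as produced, break. */
        head = 0x5F;
        emit(&head, 1);
        stream_chunk(part1, sizeof(part1));
        stream_chunk(part2, sizeof(part2));
        emit(&brk, 1);
        printf("\n");

        /* Definite length: the total (5) has to be known up front, which
         * for incrementally produced data means buffering it all first. */
        uint8_t buffered[sizeof(part1) + sizeof(part2)];
        memcpy(buffered, part1, sizeof(part1));
        memcpy(buffered + sizeof(part1), part2, sizeof(part2));
        head = 0x40 | (uint8_t)sizeof(buffered);
        emit(&head, 1);
        emit(buffered, sizeof(buffered));
        printf("\n");
        return 0;
    }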


That said, on the CBOR definite/indefinite length issue I might be happy to go along with a consensus for “MUST…”, but not on crypto algorithms. For example, I do not think this EAT draft should say “a receiver MUST implement all of ES256, ES384 and ES512, and the sender MUST restrict themselves to only these algorithms.” I think this would cripple the security of the EAT standard long term because it couldn’t adapt as algorithms evolve. We shouldn’t mandate this with a MUST in the EAT draft, for the same reason COSE and CWT didn’t.

We could address much of this with MUSTs in an IETF-standardized EAT profile. I’ve thought about starting work on one. MUST is OK in an EAT profile because there can be more than one profile over time, as crypto and the like change.
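
As a sketch of what such a profile’s MUST could look like on the verifier side: the COSE algorithm identifiers below (ES256 = -7, ES384 = -35, ES512 = -36) are from the IANA COSE Algorithms registry, while the allow-list policy itself is hypothetical, not something the EAT draft defines.

    #include <stdbool.h>

    /* COSE algorithm identifiers for the ECDSA variants named above
     * (from the IANA COSE Algorithms registry). */
    enum {
        COSE_ALG_ES256 = -7,
        COSE_ALG_ES384 = -35,
        COSE_ALG_ES512 = -36
    };

    /* A profile, not the base EAT draft, would fix the accepted set;
     * a later profile could list different algorithms as crypto evolves. */
    static bool profile_accepts_alg(int cose_alg)
    {
        switch (cose_alg) {
        case COSE_ALG_ES256:
        case COSE_ALG_ES384:
        case COSE_ALG_ES512:
            return true;
        default:
            return false;
        }
    }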

LL


