Re: [Last-Call] [Rats] [Iot-directorate] Iotdir last call review of draft-ietf-rats-eat-13

hi Eliot,

On Sun, Jun 5, 2022 at 4:25 PM Eliot Lear <lear@xxxxxxx> wrote:
> Maybe what is not immediately clear is that EAT is not a complete
> protocol but a framework.
>
> What is provided is a specification of a token.  That's how I reviewed it.

I had a very similar conversation with Carsten (and Henk) a while ago.
He had a similar reaction to yours.
Maybe two clues are not yet proof, but they are starting to look like
more than a coincidence to me :-)

> The EAT framework provides:
> * A type system -- the base claims-set & a few aggregation types;
> * Security envelopes based on COSE, JOSE;
> * CBOR and JSON serialisations;
> * A number of pre-defined semantics (the defined "claims") that one
> can readily reuse.
>
> All good.  In fact, that's so well stated that perhaps you should say it in the draft just so.

WFM, though this is for the document editors to decide.

> So, a mechanism to identify specific kinds of EAT-based PDUs needs to
> be there from the onset, otherwise one wouldn't know how to
> instantiate the framework for their use case.  And that's precisely
> the role of the profiles.
>
> I'd suggest that Section 7 is still problematic as specified.  Let's start with Section 7.2.1:
>
>    The profile should indicate whether the token format should be CBOR,
>    JSON, both or even some other encoding.  If some other encoding, a
>    specification for how the CDDL described here is serialized in that
>    encoding is necessary.
>
>    This should be addressed for the top-level token and for any nested
>    tokens.  For example, a profile might require all nested tokens to be
>    of the same encoding of the top level token.
>
> Can you give an example of when this would not be entirely clear from context?

(If I understand your question correctly) the problem the text is
hinting at would arise in the presence of "Nested-Token"s (see [1] and
its subsections), which may use a different serialisation than the
outer EAT.  A profile indication (either in the media type [2] or in a
top-level profile claim) would make the decoding context entirely
clear to the receiver.

[1] https://datatracker.ietf.org/doc/html/draft-ietf-rats-eat#section-4.2.19.1.2
[2] https://www.ietf.org/archive/id/draft-lundblade-rats-eat-media-type-00.html
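
To make this concrete, here is a rough sketch (not from the draft, in
Go, using the fxamacker/cbor library purely as an example) of how a
receiver could pick the deserialiser once the profile or media type
tells it the encoding of the (possibly nested) token:

  package eat

  import (
          "encoding/json"
          "fmt"

          "github.com/fxamacker/cbor/v2"
  )

  // decodeToken picks a deserialiser based on an out-of-band encoding
  // hint, e.g. derived from the media type or a top-level profile claim.
  func decodeToken(encoding string, data []byte) (any, error) {
          var claims any
          switch encoding {
          case "cbor":
                  if err := cbor.Unmarshal(data, &claims); err != nil {
                          return nil, fmt.Errorf("CBOR decode: %w", err)
                  }
          case "json":
                  if err := json.Unmarshal(data, &claims); err != nil {
                          return nil, fmt.Errorf("JSON decode: %w", err)
                  }
          default:
                  return nil, fmt.Errorf("unknown encoding %q", encoding)
          }
          return claims, nil
  }

Without such a hint the receiver would have to guess, which is exactly
what the profile indication avoids.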

>    In some cases CDDL may be created that replaces CDDL in this or other
>    document to express some profile requirements.
>
> Not only is this counter-cultural, but it would require an Updates:
> header on any such profile, and would further just be plain out
> confusing.
>
> I don't think the "Updates" would be required: the CDDL defines a type
> constraint that is applicable to the specific profile, it doesn't
> modify the base type.
>
> But that is precisely what the text I quoted states.

Yeah, having re-read that bit with fresh eyes, I think you're right.
The text can be made more precise in terms of what kind of "CDDL
rewriting" is allowed.

> As an aside, I think I should congratulate you for actually generating compliant SVG graphics!

Thanks! I am just an old ASCII artist trying to make a living in this
new world of scalable vectors :-)

> Coming more to the point, why is it the working group could not settle on many of the contents inside that profile document?  This profile seems like an out for the working group not having resolved some differences.  Are there those who want nonce values other than 32, 48, or 64 bytes?  If so, what brings about the difference and can it be resolved?

The 32/48/64 restriction comes from the PSA Attestation API [3].  It's
a "local" constraint that applies to the nonce type because of certain
decisions taken by the PSA API designers; it is not universal.

[3] https://git.trustedfirmware.org/TF-M/trusted-firmware-m.git/tree/interface/include/psa/initial_attestation.h#n36
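
In code, that profile-local check is tiny.  A minimal sketch (the
function name is invented for illustration; this is not the PSA
Attestation API itself):

  package psa

  import "fmt"

  // validatePSANonce enforces the PSA-profile restriction on nonce
  // length.  The base EAT nonce type carries no such restriction; the
  // 32/48/64 byte sizes come from the PSA Attestation API [3].
  func validatePSANonce(nonce []byte) error {
          switch len(nonce) {
          case 32, 48, 64:
                  return nil
          default:
                  return fmt.Errorf("PSA nonce must be 32, 48 or 64 bytes, got %d", len(nonce))
          }
  }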

> Also, some of the contents of the profile you refer to demonstrate the peril:  a nonce can be presented in three different ways.  Why?  Why does it matter that you not use an array when conveying a single nonce?  All that does is add additional branches.  Worse, if parsing has to occur based on multiple profiles, as will happen, the amount of code needed to do this is likely to balloon.

In our implementation (on the verifier side) it composes as follows:
we have a generic EAT library that deals with EAT claims individually;
creating the PSA profile consists of grabbing the three claims we
reuse from the EAT library, defining our own (implementation-id,
security-lifecycle & co.) and putting them together into one "PSA
object".  When a PSA token comes in (normally identified by its media
type), the first thing we do is call the CBOR decoder to roughly map
the binary blob onto the layout of our PSA object.  We then call a
validation method that goes through all the claims and applies the
type constraints for both the EAT claims and ours.  There is no extra
branching in the EAT library; the type specialisation is dealt with in
the profile-specific code.
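
If it helps, here is a very rough sketch of that composition in Go.
The names and struct layout are invented for this email (and the CBOR
claim keys are omitted), so please don't read it as the actual API of
our library:

  package psa

  import "fmt"

  // Claims reused verbatim from the generic EAT library.
  type EATClaims struct {
          Profile string
          Nonce   []byte
          UEID    []byte
  }

  // Claims defined by the PSA profile itself.
  type PSAClaims struct {
          ImplementationID  []byte
          SecurityLifecycle uint16
  }

  // The "PSA object": EAT claims plus profile claims in one structure.
  // The CBOR decoder maps the incoming blob onto this layout.
  type PSAToken struct {
          EATClaims
          PSAClaims
  }

  // Validate applies the generic EAT type constraints and then the
  // profile-local narrowing (e.g. the 32/48/64-byte nonce rule).
  func (t PSAToken) Validate() error {
          if len(t.UEID) == 0 {
                  return fmt.Errorf("missing ueid")
          }
          switch len(t.Nonce) {
          case 32, 48, 64:
                  // OK: profile-specific constraint on the EAT nonce type.
          default:
                  return fmt.Errorf("nonce must be 32, 48 or 64 bytes, got %d", len(t.Nonce))
          }
          if len(t.ImplementationID) == 0 {
                  return fmt.Errorf("missing implementation-id")
          }
          return nil
  }

The point being: the switch over nonce sizes lives in the profile
code, not in the shared EAT library.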

Hope this helps make the topic a bit clearer.

Cheers, and thanks again for the great discussion.

-- 
Thomas
