Re: [Cose] WG Review: CBOR Object Signing and Encryption (cose)

On 27 May 2015, at 13:53, Phillip Hallam-Baker wrote:

> The CBOR data model is a superset of the JSON data model, so a trivial
> translation of JOSE to CBOR is indeed trivial.

It is a rather complex superset.

It takes less code to parse than JSON, however, which is ironic. For those who haven't thought about the problem, consider this bit of valid JSON:

{ "one": 1e400, "on\u0065": "\udead" }

1e400 is bigger than most implementations can deal with. Are "one" and "on\u0065" the same thing? (yes) If so, what do you do about the duplicate key? (implementation-defined, non-interoperable) What do you do about the unpaired surrogate \udead, since JSON is defined as an odd subset of UTF-16 that allows invalid 16-bit code units? (usually: generate an invalid code point and hope for the best)
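
To make that concrete, here is what one widely available parser does with that document. This is a quick sketch using Python's standard json module, picked only because it is handy; other parsers make different (equally legal) choices, which is exactly the interoperability problem:

    import json

    doc = r'{ "one": 1e400, "on\u0065": "\udead" }'
    parsed = json.loads(doc)

    # The two spellings collapse to the single key "one", and this parser
    # silently keeps the last value; others keep the first, or reject the
    # document entirely.
    print(list(parsed.keys()))      # ['one']

    # 1e400 overflows a binary64 float; this parser turns it into inf.
    print(json.loads('1e400'))      # inf

    # The lone surrogate \udead is accepted, producing a string that
    # cannot be encoded as UTF-8.
    s = json.loads(r'"\udead"')
    s.encode('utf-8')               # raises UnicodeEncodeError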

The added complexity of CBOR yields interesting properties, such as removing some of the ambiguities and edge cases of JSON that hinder interoperability. That said, most of the things in the superset (other than binary blobs, unambiguous UTF-8 strings, and interoperable integers) are unlikely to be used in a COSE spec. Some CBOR implementations already have flags to turn off features such as IEEE 754 floating-point numbers in order to reduce code size.
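
As a rough illustration of the pieces of that superset that actually matter for COSE, the following sketch hand-assembles one small CBOR item byte by byte. The byte layout follows the major types in Section 2.1 of RFC 7049; the helper functions are mine and only handle the short (argument < 24) form:

    # Minimal hand-rolled encoder for this one example; a real
    # implementation would use an existing CBOR library instead.

    def head(major_type, argument):
        assert argument < 24            # short form only, for the sketch
        return bytes([(major_type << 5) | argument])

    def uint(n):                        # major type 0: unsigned integer
        return head(0, n)

    def bstr(b):                        # major type 2: byte string (no base64url)
        return head(2, len(b)) + b

    def text(s):                        # major type 3: definite-length UTF-8 text
        data = s.encode('utf-8')
        return head(3, len(data)) + data

    # { "one": 1, "blob": h'0102' } -- an integer and a binary blob, both of
    # which survive the round trip unambiguously, unlike in JSON.
    item = head(5, 2) + text("one") + uint(1) + text("blob") + bstr(b"\x01\x02")
    print(item.hex())                   # a2636f6e650164626c6f62420102 (14 bytes)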

> Doing such a trivial mapping would be completely misguided, though, as
> CBOR has additional capabilities, and the efficiencies we need in the
> constrained node network environment are indeed made possible by those
> additional capabilities [1].

> I note that you still haven't answered my challenge on data encoding
> efficiency. My JSON-C encoding shows what can be achieved using the JSON
> data model with only the addition of a binary blob data type. I posted
> the encoding for the JOSE signing example shortly after Dallas.

I've just gone back and looked at draft-hallambaker-jsonbcd-02 again. I agree with you that JSON-[BCD] do a better job than CBOR at segregating functionality that might be needed by different applications, at the expense of some implementation complexity and some complexity for the protocol designer in picking which level to use.

I disagree that JSON-C differs from JSON any less than CBOR does. If you compare Table 1 and Table 2 of your doc to Section 2.1 of RFC 7049, for example, there's a lot of similarity. The ASN.1 OID references in your Section 5 would need to be both better motivated and better described before they could be implemented interoperably.
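
For anyone following that comparison without RFC 7049 open: Section 2.1 packs a major type into the top three bits of each data item's initial byte and a short count into the bottom five. A decode-side sketch of just that initial byte (the draft's Table 1 and Table 2 are not reproduced here):

    # The eight CBOR major types from Section 2.1 of RFC 7049.
    MAJOR_TYPES = [
        "unsigned integer",       # 0
        "negative integer",       # 1
        "byte string",            # 2
        "text string",            # 3
        "array",                  # 4
        "map",                    # 5
        "tag",                    # 6
        "float / simple value",   # 7
    ]

    def describe_initial_byte(b):
        # Major type in the high 3 bits, additional information in the low 5.
        return MAJOR_TYPES[b >> 5], b & 0x1f

    print(describe_initial_byte(0xa2))  # ('map', 2): a map of two pairs
    print(describe_initial_byte(0x63))  # ('text string', 3): three bytes of UTF-8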

If you wanted to do the work to turn this doc into one or more specifications that were ready to publish as RFCs, with more complete explanations and examples, and then wanted to define a JOSE mapping into that set of specifications, I would fight for your right to do so (although I probably wouldn't provide much technical help beyond reviewing for completeness).

However, I don't see any likely possibility that we could talk existing JSON implementations (such as those in browsers) into implementing these formats natively, so we're looking at completely new code for both parsing and generation. At that point, retaining any kind of compatibility with JSON doesn't provide a win, and it carries along all of JSON's interoperability issues.

> What your constrained node example needs is a better encoding for JSON,
> not the introduction of yet another encoding for a random data model.

The goal is a substantially similar data model, potentially even an identical one, depending upon which warts of JOSE the working group has consensus to fix.

> My encoding is generated entirely mechanically. Switching from a text
> encoding to binary means simply choosing a different encoding method.
> Using compressed tags is slightly more complicated in that a dictionary
> has to be compiled, but that is at worst a registration effort.

Really not that different from CBOR.

> I repeat that no Working Group effort is required or desired for applying
> the encoding to JOSE, ACME or any current or future JSON spec.

A relatively large amount of work would be necessary to get your JSON-[BCD] spec ready for publication as one or more RFCs. If you have a group of people with the energy for that work, then I think you should do it. If you then have a group of people with the energy to do a spec (however minimal in size) for a JOSE encoding, you should do that, too. I would suggest that you get some buy-in, before you start, from folks in the community you expect to implement those specs.

In the meantime, your potential desire to do other work shouldn't keep the COSE work from starting, in my opinion.

--
Joe Hildebrand




