Re: What ASN.1 got right

ASN.1 always had data types, and then XML came along, which had no data types but a pretty good system for associating names with chunks of data, and successfully invaded most of ASN.1's space. At which point I concluded that it's more important to know something's name than its data type.

On Mon, Mar 1, 2021 at 6:18 PM Nico Williams <nico@xxxxxxxxxxxxxxxx> wrote:
On Mon, Mar 01, 2021 at 05:34:55PM -0800, Larry Masinter wrote:
> JSON-LD seems to fit modern needs from an extensibility / simplicity point
> of view.

I know nothing about JSON-LD.

> All the bit-packing goodness of various encodings are dreadful from an
> interoperability point of view.
> Rich formalisms and separation of syntax and "encoding rules" seem
> counter-productive.

The nicest thing about XML is XSLT/XPath, and the nicest thing about
JSON is jq.  Such languages are probably only feasible when you have
loose typing, which XML and JSON do.  And loose typing does arguably
mean dispensing with the formalisms that force static typing.
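A jq filter like '.[] | select(.active) | .name', say, runs against any
JSON whose shape happens to fit, with no schema in sight -- that's the
loose typing at work.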

That said, and as much as I love jq, for all the protocols I work on I
would much rather have static typing and rich formalisms.  Especially in
security protocols, I'd rather have rich formalisms.

As always, one should use the right tool for the job.  (FWIW, I used to
maintain jq and might again, and I maintain an ASN.1 implementation.)

I don't see how separation of syntax and encoding can be counter-
productive: alternative syntaxes are always possible, transcoding is
generally possible, and people often need to do one or the other
anyway.
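
To make that concrete, here's a minimal sketch (mine, with
hypothetical names) of one abstract type with two interchangeable
encodings; transcoding between them is just decode-then-encode:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* One abstract type... */
    struct person { uint32_t id; char name[32]; };

    /* ...one binary encoding: 4-byte big-endian id, then the
     * NUL-terminated name. */
    size_t encode_binary(const struct person *p, unsigned char *out)
    {
        size_t n = strlen(p->name) + 1;

        out[0] = (unsigned char)(p->id >> 24);
        out[1] = (unsigned char)(p->id >> 16);
        out[2] = (unsigned char)(p->id >> 8);
        out[3] = (unsigned char)p->id;
        memcpy(out + 4, p->name, n);
        return 4 + n;
    }

    /* ...and one textual encoding of the very same value (string
     * escaping omitted for brevity). */
    int encode_json(const struct person *p, char *out, size_t outsz)
    {
        return snprintf(out, outsz, "{\"id\":%u,\"name\":\"%s\"}",
                        (unsigned)p->id, p->name);
    }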

As to "bit-packing"...  have you noticed that every textual encoding
eventually evolves a binary adaptation?  XML has Fast Infoset.  JSON
has a multitude of binary encodings (CBOR, MessagePack, and BSON, at
least).  Parsing textual encodings isn't easy, much less parsing them
efficiently.  Parsing dynamically typed data requires more overhead
than parsing statically typed data.

Parsing JSON efficiently is really hard.  Parsing anything without a
schema shifts a lot of burden onto the developer, unless the developer
is using something like jq.  People have devoted a lot of effort to
using SIMD to parse JSON more quickly than scalar code can, but IIUC
there are no online (streaming) JSON parsers that use SIMD -- no one
would bother doing any of this for XDR because there is no need.

XDR was always simpler to compile or hand-roll codecs for than TLV
encodings, and definitely than textual encodings.  I've never heard a
bad thing uttered about XDR.
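
To give a sense of how little there is to hand-roll, here's a minimal
sketch of XDR primitives per RFC 4506 -- everything big-endian,
everything padded to a 4-byte boundary (bounds checks elided; this is
illustrative, not production code):

    #include <stdint.h>
    #include <string.h>

    unsigned char *xdr_put_u32(unsigned char *p, uint32_t v)
    {
        p[0] = (unsigned char)(v >> 24);
        p[1] = (unsigned char)(v >> 16);
        p[2] = (unsigned char)(v >> 8);
        p[3] = (unsigned char)v;
        return p + 4;
    }

    /* Variable-length opaque: length word, data, zero pad to 4. */
    unsigned char *xdr_put_opaque(unsigned char *p, const void *data,
                                  uint32_t len)
    {
        uint32_t pad = (4 - (len & 3)) & 3;

        p = xdr_put_u32(p, len);
        memcpy(p, data, len);
        memset(p + len, 0, pad);
        return p + len + pad;
    }

    const unsigned char *xdr_get_u32(const unsigned char *p, uint32_t *v)
    {
        *v = ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
             ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
        return p + 4;
    }

Decoding is the mirror image, which is a big part of why hand-rolled
XDR codecs are so common.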

It turns out that once you've parsed the syntax into an AST it's pretty
trivial to generate codecs (possibly bytecoded) regardless of the
encoding rules' nature.
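
To illustrate (with hypothetical names, reusing the xdr_put_* helpers
sketched above): the parsed schema can reduce to a table of field
descriptors that one generic loop interprets, so swapping encoding
rules means swapping put functions, not regenerating code:

    #include <stddef.h>
    #include <stdint.h>

    unsigned char *xdr_put_u32(unsigned char *, uint32_t);
    unsigned char *xdr_put_opaque(unsigned char *, const void *, uint32_t);

    enum op { OP_U32, OP_OPAQUE, OP_END };
    struct field { enum op op; size_t off; size_t lenoff; };

    /* Descriptors for: struct msg { unsigned seq; opaque buf<>; } */
    struct msg { uint32_t seq; uint32_t len; unsigned char *buf; };

    static const struct field msg_fields[] = {
        { OP_U32,    offsetof(struct msg, seq), 0 },
        { OP_OPAQUE, offsetof(struct msg, buf), offsetof(struct msg, len) },
        { OP_END,    0, 0 }
    };

    static unsigned char *encode(const struct field *f, const void *obj,
                                 unsigned char *out)
    {
        const char *base = obj;

        for (; f->op != OP_END; f++) {
            if (f->op == OP_U32)
                out = xdr_put_u32(out,
                                  *(const uint32_t *)(base + f->off));
            else
                out = xdr_put_opaque(out,
                                     *(unsigned char *const *)(base + f->off),
                                     *(const uint32_t *)(base + f->lenoff));
        }
        return out;
    }

Calling encode(msg_fields, &m, buf) walks the table; a different set
of put functions driven by the same table gives you different encoding
rules essentially for free.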

XDR, being so much simpler than TLV encodings because it has no tags
and no wasteful lengths, was always easy to handle.  But what made it
easy was precisely the absence of crutches: with no tags to lean on,
you have to have parsed the syntax defining the types you need to
encode.  There's a lot of hand-rolled XDR out there as well, including
some I look after, because it's much easier to hand-roll XDR than TLV
encodings, and certainly than textual ones.

NDR's pointer dedup feature made it much harder to implement, but
otherwise it's really similar to XDR.  OER and PER are not too
dissimilar to XDR, so they're probably comparable in implementation
complexity.
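
To sketch where that difficulty comes from (hypothetical names; a real
codec would use a hash table rather than a linear scan): the NDR
encoder has to remember every pointer it has already serialized, so
that an alias encodes as a reference to an earlier referent instead of
a second copy:

    #include <stddef.h>
    #include <stdint.h>

    struct ptrmap { const void *ptr; uint32_t refid; };

    static uint32_t intern_pointer(struct ptrmap *map, size_t *n,
                                   const void *p, int *first_time)
    {
        uint32_t id;
        size_t i;

        for (i = 0; i < *n; i++) {
            if (map[i].ptr == p) {   /* seen before: emit a reference */
                *first_time = 0;
                return map[i].refid;
            }
        }
        id = (uint32_t)(*n + 1);     /* new referent: emit id + data */
        map[*n].ptr = p;
        map[*n].refid = id;
        (*n)++;
        *first_time = 1;
        return id;
    }

XDR has no pointers at all, so an XDR codec carries none of this
bookkeeping.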

Nico
--

