Re: Pre-picking one solution (Re: [ietf-dkim] Re: WG Review: Domain Keys Identified Mail) (dkim)

> Keith Moore wrote:
> > OTOH, the assumption that _all_ public keys used to validate DKIM 
> > signatures will be stored in DNS is a very limiting one, because it 
> > appears to lead to either
> > 
> > a) a constraint that policy be specified only on a per-domain basis 
> > (which is far too coarse for many domains) or
> 
> Actually, the DKIM base spec does provide a mechanism for replacing the
> DNS keystore with something else.  Look at 1.4 for a general statement,
> and the description of the "q=" tag in 3.5.  DKIM's intended to be able
> to support user-level keys in a future version (there's some discussion
> of that in appendix A), and its design is set up specifically not to
> prevent that.
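For concreteness, the "q=" tag appears in the DKIM-Signature header field itself, and the default query method fetches the signer's public key from a DNS TXT record under the _domainkey subdomain. A rough sketch (the domain, selector, and elided hash/signature values here are illustrative, not taken from the spec):

```
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt;
        d=example.com; s=brisbane;
        h=from:to:subject:date;
        bh=<body hash>; b=<signature data>
```

A verifier seeing q=dns/txt looks up the TXT record at brisbane._domainkey.example.com; a future "q=" value could name an entirely different keystore, which is the extension point being described above.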
> 
> The proposed charter puts the details of other key management systems
> and user-level keys out of scope so that we can contain the work at this
> stage, and make quick progress on the first version.  It'd be entirely
> reasonable to recharter and attack these issues immediately after
> completing the first round of chartered work, if there are enough people
> who want to work on that.  Or we can see how deployment goes for a
> while, and form another WG in a year or so to do it.

I disagree.  The first standard version of DKIM needs to be something
that is broadly applicable, not something that handles only a few
corner cases.  The effort required to reach closure and consensus on a
document is large regardless of the document's scope, so it makes
little sense to spend all of that effort producing a first document of
limited applicability that will need to be updated soon, particularly
when much of the division within the group stems from the current DKIM
spec's overconstraining of the problem.

If your goal is gaining consensus on a useful specification in the
shortest amount of time, it makes far more sense to work on the
different aspects of the problem in parallel rather than serially:

- one subgroup works on per-domain keying, key management, and
  policies;
- another works on per-user keying, key management, and policies;
- another works on message canonicalization;
- another works on specifying signature and hash algorithms (and on
  managing the transition from weaker to stronger algorithms as
  weaknesses are discovered);
- and a coordination team is responsible for making everything fit
  together and for managing the framework (definitions, header field
  names, parameter names, keywords) that all of these pieces fit into.

Give each subgroup a year, with deliverables at four-month intervals.
After a year, expect to do working group last call on all pieces and
start polishing the drafts for final publication.  If any piece proves
unworkable, it can be thrown out after a year.  Even if that piece
then needs to be replaced from scratch and this causes a delay, that
is better than imposing a serial dependency a priori.  The unworkable
piece could as easily be per-domain keying as per-user keying.

I believe this approach will produce a consensus far more quickly than
the serial approach, and that the resulting standard will be more
broadly applicable and more robust.

Keith

_______________________________________________

Ietf@xxxxxxxx
https://www1.ietf.org/mailman/listinfo/ietf
