Re: [Last-Call] [DNSOP] Working Group Last Call for Revised IANA Considerations for DNSSEC

Hi Vladimir, 

Thanks for the feedback. Please see my responses inline.  


On Wed, Sep 15, 2021 at 1:45 PM Vladimír Čunát <vladimir.cunat+ietf@xxxxxx> wrote:
> On 15/09/2021 16.41, Daniel Migault wrote:
>> Outside experimentation, especially for national algorithms, this will
>> lead to nations having their algorithms qualified as standard while
>> other nations having their algorithms qualified as non-standard. I
>> would like to understand why this cannot be a problem.
>
> I'm sorry, I'm a bit confused about which nations would get standard
> algorithms. Are P-256 and P-384 considered "national" crypto? I know
> they're from NIST, but they seem widely popular outside the USA.
> Technically we have old GOST algo(s) on the standards track, though
> they are already obsolete in their nation, so those? Or some other
> (planned) algorithm I've missed? Apart from that, I personally think
> that allowing "cheaper" allocations of algorithm numbers *reduces*
> this disparity/problem instead of making it worse, but perhaps I'm
> missing the essence of the issue.

The reason I am mentioning national algorithms is that they motivated the support for lowering the barrier to code point allocation, at least during the call for adoption. I was wondering whether such nations would find it acceptable to have their algorithms qualified as non-standard. I do not have any specific example in mind, and as far as I know GOST is on the standards track [1]; this was already the case during the call for adoption, and I suppose that is why it was mentioned as an example.
I am pretty sure similar cases will show up in the future, so we should try, as you mention, to reduce these disparities. If that is not seen as an issue, that is good. In any case, the issue here is non-technical.

[1] https://datatracker.ietf.org/doc/draft-ietf-dnsop-rfc5933-bis/

 
> Interoperability could be mentioned for reference, though in practice
> having a standard does not necessarily help that much, e.g. Ed25519
> validation levels are still rather low after four years as a standard,
> and Ed448 is probably even worse:
> https://www.potaroo.net/ispcol/2021-06/eddi.html

What I meant is that when a code point is adopted as a standard, there is a commitment from most resolver developers to support it. Interoperability is then achieved once all resolvers are updated, and it takes some time (software life-cycle management) for the system to get there.
I agree with you that "standard" does not mean "implemented by the resolver", but mostly because a standard algorithm may also be deprecated; this is why it seems useful to me to mention RFC 8624, which defines the algorithms that need to be implemented. In particular, it would be good to know whether non-standard algorithms could be mandated by updates to RFC 8624. If that is not the case, then as far as I understand it, non-standard algorithms are likely never to become interoperable.
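To make that concrete, here is a minimal sketch in Python of the fail-open rule in RFC 4035, Section 5.2: a validator that implements none of the algorithms in a zone's DS RRset treats the zone as insecure rather than bogus, so the signatures are simply never checked. The supported-algorithm set below is an assumption for illustration, not taken from any particular resolver.

    # Minimal model of RFC 4035, Section 5.2: if a validator implements
    # none of the algorithms in a zone's DS RRset, it must treat the
    # zone as unsigned (insecure), not as bogus: it "fails open".

    # Algorithm numbers from the IANA DNSSEC registry.
    RSASHA256 = 8
    ECDSAP256SHA256 = 13
    ED25519 = 15
    ED448 = 16

    # Assumed support set of a hypothetical validator that never
    # implemented Ed448.
    SUPPORTED = {RSASHA256, ECDSAP256SHA256, ED25519}

    def zone_status(ds_algorithms):
        """Classify a delegation from the algorithms in its DS RRset."""
        if not ds_algorithms:
            return "insecure"  # no DS record: unsigned delegation
        if SUPPORTED.intersection(ds_algorithms):
            return "validate"  # at least one chain we can verify
        return "insecure"      # only unsupported algorithms: fail open

    print(zone_status([ED448]))           # insecure: signature never checked
    print(zone_status([ED25519, ED448]))  # validate: via the Ed25519 chain

A zone signed only with an algorithm that most validators never implement therefore gets, in effect, no validation at all, whatever the registry says about its code point.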
Ed* adoption shows that deployment can take more than four years, but its deployment status remains consistent with RFC 8624.

As a result, if the software life-cycle time is already long for a standard algorithm, a non-standard one is likely to take even longer to become interoperable (which I think is worth mentioning). I also believe it is helpful to have RFC 8624 referenced as the guide for which algorithms to implement and deploy, rather than relying on the status of the code point, which is, I think, your point.

> --Vladimir



--
Daniel Migault
Ericsson
