Re: new.net (was: Root Server DDoS Attack: What The Media Did Not Tell You)

> > well, it's not clear that it works well for .com.  try measuring
> > delay and reliability of queries for a large number of samples
> > sometime, and also cache effectiveness.
> 
> I guess the burden of proof is on those who argue that it does _not_
> work well.

The burden of proof is on those who want to change the status quo.

FWIW, I'm doing these experiments myself, and will publish the
results when I'm done in such a way that others should be able
to repeat the experiments, compare their results with mine,
and form their own conclusions. 
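
To give a rough idea of the kind of measurement I mean, here is a
small sketch (only an illustration, not the experiment itself; it
assumes the third-party dnspython library and a made-up sample file
"names.txt" with one domain name per line).  It times A-record
lookups and counts failures:

    import time
    import dns.resolver   # third-party: dnspython (pip install dnspython)

    resolver = dns.resolver.Resolver()
    resolver.lifetime = 5.0          # overall timeout per query, seconds

    delays = []
    failures = 0
    with open("names.txt") as f:     # hypothetical sample file
        for name in (line.strip() for line in f if line.strip()):
            start = time.monotonic()
            try:
                # resolve() is dnspython 2.x; older versions use query()
                resolver.resolve(name, "A")
                delays.append(time.monotonic() - start)
            except Exception:
                failures += 1

    if delays:
        print("queries: %d  failures: %d  mean delay: %.1f ms"
              % (len(delays) + failures, failures,
                 1000 * sum(delays) / len(delays)))

A real experiment would of course control for which resolver and
which name samples are used, so that others can repeat it.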

Of course whether DNS currently works "well" is subjective.  But 
there's a tendency to think of it as working well simply because 
we are accustomed to that level of service.

> > let's put it another way.  under the current organization if .com breaks
> > the other TLDs will still work.   if we break the root, everything fails.
> 
> Since .com was running _on_ the root-servers.net until recently
> without problems, what are we talking about?
> 
> Naturally there won't be 1 million TLDs all at once. We could start
> with a couple of hundred. That would merely double the size of the
> root.

It's not just the size of the root that matters - the distribution
of usage (and thus locality of reference) also matters.  
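
To make the locality point concrete, here is a toy illustration (my
own made-up numbers, not a claim about real root traffic): with the
same population of names and the same cache size, a heavily skewed
query mix gets a much better cache hit rate than a uniform one.

    from collections import OrderedDict
    import random

    def hit_rate(queries, cache_size):
        # Simple LRU cache: OrderedDict tracks recency order.
        cache = OrderedDict()
        hits = 0
        for q in queries:
            if q in cache:
                hits += 1
                cache.move_to_end(q)
            else:
                if len(cache) >= cache_size:
                    cache.popitem(last=False)   # evict least recently used
                cache[q] = True
        return hits / len(queries)

    names = list(range(100000))                 # made-up name population
    uniform = [random.choice(names) for _ in range(50000)]
    skewed = [names[min(int(random.paretovariate(1.0)) - 1, len(names) - 1)]
              for _ in range(50000)]            # heavy-tailed popularity

    print("uniform query mix hit rate: %.2f" % hit_rate(uniform, 1000))
    print("skewed query mix hit rate:  %.2f" % hit_rate(skewed, 1000))

The absolute numbers mean nothing; the point is only that cache
effectiveness depends on how queries are distributed across names,
not just on how many names exist.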

The point is that if removing constraints on the root causes problems 
(and there are reasons to believe that it will), we can't easily go back
to the way things were before.

Keith

