iljitsch@xxxxxxxxx (Iljitsch van Beijnum) writes:

> Unicast: A, E, H, L
> Anycast: B, C, D, F, G, I, J, K, M (now or planned)
>
> The thing that worries me is that apparently, there is no policy about
> this whatsoever, the root operators each get to decide what they want
> to do.

The table is round.  Policies are discussed as a group but set
individually.  The result is a service which has never been "down hard",
not ever, not for any millisecond out of the last 15 years.  This is
"strength by diversity."

> The fact that .org is run using only two anycast addresses also indicates
> that apparently the ICANN doesn't feel the need to throw their weight
> around in this area.

Apparently you have your facts wrong about how much sway ICANN had over
the anycasting of .ORG, but those details aren't mine to tell; let
others speak.

> Now obviously anycasting is a very useful mechanism to reduce latency
> and increase capacity and robustness. However, these properties are
> best served by careful design rather than organic growth.

Careful design by whom?  Organic compared to what?  I assure you that
f-root has grown by careful design.  It's only organic in that we go
where we're invited rather than having a gigantic budget that could be
used as a leash.

Check out <http://www.isc.org/ops/f-root/> and the list of mirror
sites, look for some sponsors you know, call them, and ask why they
sponsored f-root and whether they're happy about it.  Then find someone
in that region and ask them to run

	dig @192.5.5.241 hostname.bind chaos txt

and tell you whether they're talking to a local f-root.  What could
scale better, or allocate resources more efficiently?  (Central
planning didn't help the USSR.)

> If we consider the number of actual servers/server clusters and the
> number of root IP addresses as given, there are still many ways to skin
> this cat. One would be using 12 unicast servers and anycast just one
> address.

Who is "we", though?  That's always the excluded middle of this debate.
I know who I am -- a root operator.  But who are "we"?  And which of
the million different answers to "who are 'we'?" would you like to see
govern the choice of "who gets to decide how this stuff ought to work?"
E.g., I'm sure, without even having heard it, that you wouldn't want
the choice of "who decides?" governed by Dean Anderson's answer to "who
are 'we'?"

> It seems to me that any design that makes the root addresses seem as
> distributed around the net as possible would be optimal, as in this
> case the chances of an outage triggering rerouting of a large number
> of root addresses are as small as possible. In order to do this, the
> number of root addresses that are available within a geographic region
> (where "region" < RIR region) should be limited.

In counterpoint, it seems to me that any unified design will make the
system subject to monoculture attacks or ISO-L9 capture, and that the
current system which you call "unplanned and organic" (but which is
actually just "diversity by credo") yields a stronger system overall.

> (Just having the roots close is of little value: good recursive
> servers home in on the one with the lowest RTT anyway, so having one
> close by is enough. However, when this one fails it's important that
> after the timeout, the next root address that the recursive server
> tries is highly likely to be reachable, in order to avoid stacking
> timeout upon timeout. A couple hundred ms of extra round-trip delay
> doesn't mean much in cases where the recursive server suffers a
> timeout anyway.)

What would help overall DNS robustness would be if more DNS clients
used recursion, and cached what they heard (both positive and
negative).
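To make that concrete, here is a toy sketch (mine alone -- hypothetical names and TTLs, not any real resolver's code) of a client-side cache that remembers negative answers as well as positive ones, so the same question is not re-asked upstream until its TTL expires:

```python
import time

MISS = object()   # sentinel: distinguishes "not cached" from a cached NXDOMAIN

class DnsCache:
    """Toy resolver-side cache holding both positive answers and
    negative (NXDOMAIN) results, each bounded by a TTL."""

    def __init__(self):
        self._entries = {}  # (name, rtype) -> (expires_at, answer-or-None)

    def put(self, name, rtype, answer, ttl, now=None):
        now = time.monotonic() if now is None else now
        self._entries[(name.lower(), rtype)] = (now + ttl, answer)

    def get(self, name, rtype, now=None):
        now = time.monotonic() if now is None else now
        key = (name.lower(), rtype)
        entry = self._entries.get(key)
        if entry is None:
            return MISS                  # never asked: go query upstream
        expires_at, answer = entry
        if now >= expires_at:
            del self._entries[key]
            return MISS                  # expired: query upstream again
        return answer                    # may be None: a remembered NXDOMAIN

cache = DnsCache()
cache.put("example.com", "A", "192.0.2.1", ttl=300)
cache.put("no-such.example", "A", None, ttl=60)   # cache the negative answer too

print(cache.get("example.com", "A"))      # hit: no upstream query needed
print(cache.get("no-such.example", "A"))  # None: NXDOMAIN remembered, not re-asked
```

A client that behaved like this would never send the lockstepped repeat queries for the same (often nonexistent) name hundreds of times a minute.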
A frightfully large (and growing) segment of the client population
always walks from the top down (I guess these are worms or viruses or
whatever), and another growing/frightful segment asks the same question
hundreds of times a minute and doesn't seem to care whether the
response is positive or negative, only that it has to arrive so that
the (lockstepped) next (same) query can be sent.

If you'd like to unify something, perhaps it could be DNS client
behaviour and network-owner recursive caching forwarder design.  And
while you're at it, please outlaw those fiendish DNS-based load
balancers.  f-root should still be a 486DX2-66 like it was in ~1995,
rather than fifty 1GHz Pentiums; the 500X load 10 years later is due to
client stupidity, not population growth or backbone speed increases.

-- 
Paul Vixie
_______________________________________________
Ietf@xxxxxxxx
https://www1.ietf.org/mailman/listinfo/ietf