Re: Accurate history [Re: "professional" in an IETF context]


 



Eduardo,
On 03-Nov-21 21:31, Vasilenko Eduard wrote:
Then why was an address resolution protocol (ND or whatever) needed at all, if the L3 address always had the L2 address inside it?

It was always clear that use of the MAC address as the interface identifier was specified separately for each LAN technology. If you recall the discussions a couple of years ago, I was arguing for /64 to be clearly defined as a *parameter* of the address architecture, one which we have currently set at 64. In the very early days it was set at 48. Even in RFC 1884, the use of a MAC address as the IID was *explicitly* stated to be an example (section 2.4.1).

ND is therefore an unavoidable step, especially on a LAN with no router.
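
For anyone who wants to see concretely what "MAC address as the IID" meant, here is a rough sketch (mine, purely illustrative, with a made-up MAC address) of the Modified EUI-64 derivation that RFC 2464 specifies for Ethernet: insert 0xFFFE in the middle of the 48-bit MAC and flip the universal/local bit.

# Rough sketch (illustrative only): Modified EUI-64 IID derivation per
# RFC 2464 / RFC 4291 Appendix A -- the "MAC address as IID" mechanism
# under discussion. The example MAC address below is made up.
def mac_to_modified_eui64_iid(mac: str) -> str:
    """Derive a 64-bit IPv6 interface identifier from a 48-bit MAC address."""
    octets = [int(part, 16) for part in mac.split(":")]
    assert len(octets) == 6, "expected a 48-bit MAC such as 00:25:96:12:34:56"
    # Insert 0xFFFE between the OUI and the device-specific half,
    # then flip the universal/local (U/L) bit of the first octet.
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    eui64[0] ^= 0x02
    # Render as four 16-bit groups, IPv6 textual style.
    return ":".join(f"{(eui64[i] << 8) | eui64[i + 1]:x}" for i in range(0, 8, 2))

print(mac_to_modified_eui64_iid("00:25:96:12:34:56"))  # -> 225:96ff:fe12:3456

Appending that 64-bit IID to a 64-bit routable prefix gives the full address, which is exactly why the IID length and the /64 boundary are the same knob.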


OSI called for isolation between layers; that was a big deal for OSI.
It is not isolation when addresses from different layers are embedded inside each other.


It has been described as a serious bug in the OSI model, by no less than John Day, but this is indeed what OSI does. The OSI Reference Model says you can create a layer N address by concatenating an (N-1)-address on the end of an (N)-identifier. But that isn't the IPv6 model. A layer 3 address is a layer 3 identifier concatenated with a layer 3 routable prefix. And today we have largely discarded the practice of constructing the layer 3 identifier from the layer 2 address.


Brian explained how it happened: this mistake was copied from another protocol.
IMHO, that is not a good excuse.


No, I didn't say it was a mistake or offer an excuse. I just tried to provide objective facts.

The consequences are forever with us:
1. Filters from all vendors are affected 4x between IPv6 and IPv4; 2x of that was because of this decision.
2. Table sizes are affected 2x, and maybe 4x in the future if the IPv6 lookup has to expand from 64 to 128 bits.
3. SPRING is looking at how to "compress SIDs". Such a label stack stresses PFE scalability 2x for no good reason.

Wasting 2x more bits on addresses is not a "free lunch".
We can burn more energy and squeeze more gates into the ASIC to overcome the scalability problem that was created.
IPv6 is an anti-green technology as a result.
If more bits have to be processed, nature pays the cost.


On the timescale concerned, Moore's law has given us a factor of more than 1000. I agree that there's an energy cost to everything, but most energy wastage comes from idiocies like Bitcoin and unnecessarily high-quality video, not from FIB lookups.


By the way, IPv6 is now close to being a "64-bit addressing technology". In reality it is a little bigger, because not all of the bits are wasted on IDs within the subnet.
All those calculations of addresses for every atom on Earth are wrong. The real address space is 2^64 times smaller.


Sure. The important calculation is how many prefixes are available. The atoms in the universe calculation is just silly.
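
If anyone wants to do that calculation, a back-of-the-envelope sketch (mine, assuming allocations stay within 2000::/3) looks like this:

# Back-of-the-envelope sketch: how many prefixes of a given length fit
# inside 2000::/3, the block currently used for global unicast.
def prefixes_available(prefix_len: int, block_len: int = 3) -> int:
    """Number of /prefix_len prefixes inside a /block_len block."""
    return 2 ** (prefix_len - block_len)

for plen in (48, 56, 64):
    print(f"/{plen} prefixes in 2000::/3: 2^{plen - 3} = {prefixes_available(plen):.2e}")

Even counting only /48s, that is roughly 2^45 (tens of trillions) of end sites, and that is the number that actually matters, not 2^128 addresses.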

The current practice of "wasting as much IPv6 space as possible in every design" will backfire in a few decades.


If you're arguing for flexibility in the /64 boundary, yes, I would still like to see it clearly defined as a parameter in the architecture that we might vary in some contexts (e.g. IoT). If you're arguing against using address bits for higher-layer semantics, I fully agree with you. But if you look around, you'll find a lot of people who think that's a great idea. (As long as they confine it to their own limited domains, however, it doesn't really matter that much.)

Or a big revamp will be needed to expand from 64 to 128 bits: CIDR all over again.


Yes, there are some warning signs in BGP4 growth for IPv6, but that's due to /48s. In the end, it's BGP4 scaling that limits everything.
https://dx.doi.org/10.1145/3477482.3477490

   Brian



Eduard
-----Original Message-----
From: ietf [mailto:ietf-bounces@xxxxxxxx] On Behalf Of Brian E Carpenter
Sent: Tuesday, November 2, 2021 11:08 PM
To: ietf@xxxxxxxx
Subject: Accurate history [Re: "professional" in an IETF context]

On 03-Nov-21 05:45, Vasilenko Eduard wrote:
+1.
It was a big mistake to break the OSI model and include the L2 address (MAC) inside the L3 address (IPv6).

I think you are not familiar with the CLNP NSAPA GOSIP addressing model. As RFC1526 clearly explains, the CLNP address architecture proposed for the Internet embodied an ID field that could be an IEEE MAC address (see section 3.1 in particular). That's how DECnet/OSI worked, too. (And Novell Netware, copied from Xerox PUP, a.k.a. XNS.)

We didn't break the OSI model, we copied it.

Half of the address bits were wasted.

No, we *doubled* the address size to accommodate the ID field. Most people at the time expected a 64-bit address. I believe that it was the 20-byte GOSIP address that made the 16-byte IPv6 address thinkable.

     Brian

Many people keep coming and asking to get some bits of the IPv6 address for some new protocol.

Eduard




