Re: 128 bits should be enough for everyone, was:

Austin Schutz writes:

> But this has been known all along. It's a feature, not a bug.

Yeah, right.

> If we "throw away" the last 64 bits we are left with 2**64 addresses,
> which is obviously what was intended from the beginning.

And when you allocate bit fields in the remaining 64 bits, you exhaust
that as well.

> The current v4 /0 has lasted for some time now, it's difficult to
> envision a time where we would be burning through space so fast that
> /32s in v6 space wouldn't last months, if not years.

The inability to envision things is the root of the problem.

> But for argument's sake, let's say a /32 lasts one day at some point
> in the dim future. This gives us 2 ** 32 days before we run out. not
> too bad for those of us not realistically having much of a chance to
> live beyond 2 ** 15 or so.

More bogus math.  Every time someone tries to compute capacity, he
looks at the address space in terms of powers of two.  Every time
someone tries to allocate address space, he looks at the address space
in terms of a string of bits.  Since the space is allocated as bit
fields, it is exhausted in linear time, even though capacity
projections are based on an exponential space.  For this reason, the
address space always comes up too short, too soon.  The real mystery
is that this seems to surprise people who should know better.
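The linear-versus-exponential point can be sketched numerically.  A
minimal illustration (the field names and widths below are invented
for the example, not any real allocation policy):

```python
# Capacity projections count 2**n addresses, but each structural
# field carved out of the address consumes its bits linearly.

TOTAL_BITS = 64  # the routing half of an IPv6 address

# Invented example of bit fields an addressing plan might reserve.
fields = {
    "registry": 8,
    "provider": 16,
    "customer": 16,
    "site": 8,
    "subnet": 16,
}

remaining = TOTAL_BITS
for name, width in fields.items():
    remaining -= width
    print(f"after {name:8s} ({width:2d} bits): {remaining} bits left")

# The exponential view says 2**64 addresses exist; the linear view
# shows that five modest fields leave 0 bits -- the space is gone
# even though almost no addresses were ever handed to hosts.
print("addresses 'available':", 2 ** TOTAL_BITS)
print("bits remaining:", remaining)
```

Five fields averaging under 13 bits each exhaust the whole 64-bit
half, regardless of how few hosts were ever numbered.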

> If we "throw away" additional bits for other engineering purposes
> the number of days would be 2 ** (32 - wasted bits). If we waste a full
> half of those bits we're down to 2 ** 16 days (about 180 years).

Yes, I know how these calculations work.  See above.
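(The quoted figure is internally consistent as arithmetic; the
objection is to its premise of exponential consumption, not to the
multiplication.  A quick check:)

```python
# Sanity-check the quoted arithmetic: wasting half of 32 bits
# leaves 2**16 days, which is indeed roughly 180 years.
days = 2 ** (32 - 16)        # half the 32 bits "wasted"
years = days / 365.25
print(days, round(years, 1))  # 65536 days, about 179.4 years
```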

> Again, that assumes we'd burn what is equivalent to a v4 /0
> every single day for 180 years.

It doesn't matter how fast we "burn" a v4 /0, because the space is
exhausted by encoding information into the address and allocating bit
spans in the address, not by actually handing out addresses.

> Routing doesn't (and will never) work that way. Much like with
> airlines, the cost of a path is more complex than mere distance.

The cost of a processor will always be high and it will always fill
three seven-foot-high cabinets, therefore all computers will always be
timesharing systems. There's no need to imagine desktop computers;
computing doesn't (and will never) work that way.  We should prepare
for the future of dumb terminals, which will eventually be everywhere
(in every home and office).

Do you see the problem with making predictions about the future?

> That would be an alternative, certainly. I'm not sure how excited
> to get about a 1 byte payload needing 1000 bytes of header, but I'm sure
> it's possible.

It would only need a thousand bytes if it were being routed to the
other side of the universe.

> Is it worth throwing away the current post-v4 solution?

Throwing away a few years' work for something that makes virtually no
assumptions about the future?  Yes, it might well be worth it.  It
would have been worth it if it had been done that way in the first
place, too.



_______________________________________________

Ietf@xxxxxxxx
https://www1.ietf.org/mailman/listinfo/ietf
