Iljitsch van Beijnum writes:

> I guess not because I have no idea what you're talking about.

There is a natural tendency to think that dividing a 128-bit address field into two 64-bit fields cuts the address space in half (or perhaps doesn't diminish it at all). In reality, dividing the field this way may reduce the usable address space by a factor of as much as nineteen orders of magnitude (2^64 is roughly 1.8 x 10^19). Again and again, engineers make this mistake and render large parts of an address space unusable through careless, bit-wise allocation of addresses in advance.

For example, if you assign addresses sequentially, a 128-bit address can address up to 2^128 hosts. If you divide the address field into two 64-bit fields--one for the country and one for hosts within the country, say--you might intuitively think you've only slightly reduced the total number of available addresses. In fact, you've dramatically diminished the size of the address space. Unless you have exactly 2^64 countries in the world, each of which contains exactly 2^64 hosts, very large segments of the address space will be wasted. The effective address space may be many trillions of times smaller than you expect, and you may find yourself exhausting it far more quickly than you ever thought possible.

When you divide an address field into bit-wise subfields, you encode information into the address. This introduces redundancy, which reduces the usable address space. If you allocate a country code in the example above to a country that has only 1000 computers, you've just wasted roughly 18,446,744,073,700,000,000 machine addresses. Do this for multiple countries, and you can easily waste almost all of your address space through this extremely foolish division of the address field. It's unlikely you'll have 2^64 countries to accommodate, and it's equally unlikely that each of those countries will have exactly 2^64 hosts (no more, no less) to address, so you are wasting many bits of the address field. And the wasted space increases exponentially with the number of wasted bits: wasting ten bits instead of five doesn't waste twice the space, it wastes _32 times_ the space. (The sketch at the end of this message works through the arithmetic.)

I've often felt that the most common mistake engineers make in designing systems, especially computer systems, is underestimating capacity and wasting the capacity they already have. Dividing address fields into preallocated zones is the most common example of this. They just never learn, and they cannot be told. They are always convinced that their address field holds more space than anyone will ever need, and by the time they find out otherwise, it's too late. It happened with 16-bit addresses, with 32-bit addresses, and with IPv4; it will soon happen with 64-bit addresses and with IPv6. I don't understand how engineers can be so stupid in this one specific domain.
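
To put rough numbers on the waste described above, here is a quick back-of-the-envelope sketch in Python (chosen only because it handles 128-bit integers natively). The country count and the hosts-per-country figure are made-up illustrative assumptions, not anything from the post itself:

    # Back-of-the-envelope arithmetic for the country/host split described above.
    # The country count and hosts-per-country values are illustrative assumptions.

    TOTAL_BITS = 128
    COUNTRY_BITS = 64                       # high half: country code
    HOST_BITS = TOTAL_BITS - COUNTRY_BITS   # low half: host within the country

    countries = 200            # roughly how many countries exist today (assumed)
    hosts_per_country = 1_000  # the sparsely populated country from the example

    flat_space = 2 ** TOTAL_BITS            # sequential assignment: every value usable
    usable = countries * hosts_per_country  # what the split actually lets you use

    print(f"flat 128-bit space : {flat_space:.3e}")
    print(f"usable after split : {usable:.3e}")
    print(f"waste factor       : {flat_space / usable:.3e}")

    # Waste from a single country code: the whole 2^64 host block minus the
    # 1000 hosts actually used -- about 1.8 x 10^19 addresses, i.e. the
    # nineteen orders of magnitude mentioned above.
    print(f"wasted per country : {2 ** HOST_BITS - hosts_per_country:,}")

    # Waste grows exponentially with the number of wasted bits:
    # 10 wasted bits versus 5 wasted bits is 2**10 / 2**5 = 32 times worse, not 2.
    print(f"10 bits vs 5 bits  : {2 ** 10 // 2 ** 5}x")

Run as-is, the per-country waste comes out to 18,446,744,073,709,550,616 addresses, and the overall waste factor is enormous because, with these assumed figures, most of both halves of the field goes unused.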