On Wed, Jan 5, 2011 at 1:03 PM, Bill Moran <wmoran@xxxxxxxxxxxxxxxxx> wrote:
>
> But the point (that you are trying to sidestep) is that the UUID namespace
> is finite, so therefore you WILL hit a problem with conflicts at some point.
> Just because that point is larger than most people have to concern themselves
> with isn't an invalidation.

The UUID itself is 128 bits, and some of those bits are pre-determined: a "normal" random (version 4) UUID has 122 bits of randomness. How many would one have to store in a database before a collision was even a concern? Such a database would be freaking huge, probably far larger than anything that anyone has.

Let's say (I'm pulling numbers out of my ass here) that you wanted to store 2^100 rows in a table. Each row would have a UUID and some other meaningful data, maybe a short string or something. I don't recall what the PostgreSQL row overhead is (~20 bytes?), but let's say that each row in your magic table of death required 64 bytes. A table with 2^100 rows would then require 64 * 2^100 = 2^106 bytes, which is roughly 10^32 bytes (log_10(2^106) is about 31.9). How on Earth would you store that much data? And why would you ever need to?

I postulate that UUIDs in PostgreSQL, generated from a "good" source, will not produce collisions in any reasonable database.

Food for thought:
http://blogs.sun.com/dcb/entry/zfs_boils_the_ocean_consumes

ps- If my math is off, I apologize. It's been a long day...
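pps- A quick back-of-the-envelope script, in case anyone wants to play with the numbers themselves. It assumes version 4 UUIDs (122 random bits) and uses the standard birthday approximation, so treat it as a rough sketch rather than anything authoritative about Postgres itself:

import math

RANDOM_BITS = 122          # a version 4 UUID fixes 6 of its 128 bits (version + variant)
SPACE = 2 ** RANDOM_BITS   # number of distinct random UUID values

def collision_probability(n_rows):
    """Approximate P(at least one duplicate) among n_rows random UUIDs,
    using the birthday approximation 1 - exp(-n^2 / (2 * SPACE))."""
    return 1.0 - math.exp(-(n_rows ** 2) / (2.0 * SPACE))

for rows in (10 ** 9, 10 ** 12, 10 ** 15, 2 ** 61):
    print("%.3e rows -> collision probability ~ %.3e"
          % (rows, collision_probability(rows)))

# Storage side of the argument: 2^100 rows at 64 bytes per row.
print("2^100 rows * 64 bytes = 2^106 bytes ~ %.2e bytes" % (64 * 2 ** 100))

Even at 10^15 rows the estimated collision probability comes out on the order of 10^-7, which is why I'm not worried about any real-world table.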