dalroi@xxxxxxxxxxxxxxxxxxxxxxxxxxxx (Alban Hertroys) writes:
> On 6 Jan 2011, at 17:51, Chris Browne wrote:
>
>> wmoran@xxxxxxxxxxxxxxxxx (Bill Moran) writes:
>> If your system is sufficiently negligently designed that this
>> particular conflict causes it to kill people, then I wouldn't be too
>> inclined to point at this issue with UUIDs being the Real Problem
>> with the system.
>>
>> This is NOT the only risk that the system faces; you can't get
>> *nearly* as low probabilities attached to hardware and network
>> issues such as:
>> - Disks failing
>> - Cosmic rays twiddling bits in memory
>> - Network connections failing part way through the work
>> - Dumb techs blindly cloning the same "host key" onto every one of
>>   the EMTs' data collection devices
>
> Let's say that you actually build a mission-critical system for which
> you'd need to evacuate the country if it fails. You pick the best ECC
> RAM you can find, the most reliable type of disk storage available,
> your fallback network has a fallback network of its own, etc.
> Basically, you have done everything you could to ensure that the
> chances of the system failing are as small as technically possible.
>
> All those little failure chances add up to a certain number. Using
> UUIDs for your IDs is not required for the design of the system, yet
> you chose to do so. You added a nearly infinite chance of UUID
> collisions to the accumulated chance of the system failing.

Infinite? The probability can't conceivably exceed 1. It's scarcely
likely to exceed "infinitesimal."

I've built clustered systems, and frequently the result is a Rube
Goldberg apparatus: protection against failures of the apparatus that
protects against failures of still further protective apparatus. The
stack grows so tall and so intricately interwoven that operators need
to be *mighty* careful not to tip anything over, lest the protective
apparatus collapse and knock over the very system it was supposed to
protect.

> Now the system miraculously fails and the country needs evacuating.
> A committee is going to investigate why it failed. If the dumb techy
> above is responsible, they just found themselves a scapegoat. If
> they didn't, but stumble upon your unnecessary usage of UUIDs
> instead... Let's just say I don't want to be that person.

If the system is that mission-critical, then it well and truly
warrants a proper analysis of the risks, one detailed enough to *know*
the likelihoods of the various expectable failure conditions, in
rather more detail than the oversimplification of characterizing them
as "infinitesimal" or "infinite."

> I have to agree with Bill here: if lives depend on your system, then
> anything that adds to the failure chances is very hard to defend. In
> the end it often boils down to responsibility in case of failure,
> not to mention what it does to your own peace of mind.

It seems to me that using serially assigned values, along with
manually assigned server IDs, to construct a would-be-unique value is
likely to introduce quite a lot *more risk* of system failure than
would the use of UUIDs. So someone who rules out UUIDs based on some
fallaciously imagined "infinite chance of collisions" is jumping away
from a small risk, and accepting one much more likely to take lives.
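To make that concrete, here's a toy Python sketch (all names and
numbers hypothetical, not anybody's actual schema) of the composite-ID
scheme. Two nodes that a tech mistakenly configures with the same
server ID collide on *every* record; the same clone-the-config blunder
has no effect at all on random UUIDs:

    import uuid

    def composite_ids(server_id, count):
        # IDs built from a manually assigned server ID plus a
        # locally incremented sequence number.
        return {"%d-%d" % (server_id, n) for n in range(count)}

    # Two nodes that a tech accidentally configured identically:
    node_a = composite_ids(server_id=7, count=1000)
    node_b = composite_ids(server_id=7, count=1000)
    print(len(node_a & node_b))   # 1000 -- every single ID collides

    # Random UUIDs don't care how the hosts were (mis)configured:
    print(len({uuid.uuid4() for _ in range(2000)}))   # 2000 distinct,
                                                      # in practice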
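And to put some numbers on "infinitesimal": assuming version-4 (fully
random) UUIDs, which carry 122 random bits, the standard birthday
approximation P ~= 1 - exp(-n(n-1)/2^123) is a one-liner to evaluate.
A back-of-the-envelope sketch, not a rigorous analysis:

    from math import expm1

    SPACE = 2 ** 122   # distinct values a version-4 UUID can take

    def p_collision(n):
        # Birthday bound: 1 - exp(-n(n-1)/(2*SPACE)); expm1 keeps
        # precision when the exponent is vanishingly small.
        return -expm1(-n * (n - 1) / (2.0 * SPACE))

    for n in (10**6, 10**9, 10**12):
        print(n, p_collision(n))

That prints roughly 9.4e-26, 9.4e-20, and 9.4e-14: generate a
*trillion* UUIDs and the odds of even one collision are still about
one in ten trillion.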
We haven't seen any analysis that would actually distinguish
"infinite" from "infinitesimal," beyond the observation that
"infinite" is infinitely larger than the largest probability any
event can have, namely 1.
-- 
(format nil "~S@~S" "cbbrowne" "gmail.com")
"But life wasn't yes-no, on-off. Life was shades of gray, and
rainbows not in the order of the spectrum."
-- L. E. Modesitt, Jr., _Adiamante_