Merlin Moncure wrote:
If your data modeler made the assumption that a MAC is unique (a mistake, obviously), at least the other tables are protected from violations of that assumption, because the database would reject them with an error, which is a Good Thing. Without a uniqueness constraint you now have ambiguous data, which is a Very Bad Thing.
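For the record, that rejection looks like this (a minimal sketch; the table and column names are hypothetical):

    -- hypothetical table; the point is the uniqueness constraint on the MAC
    CREATE TABLE devices (
        mac  macaddr PRIMARY KEY,  -- or a plain UNIQUE, if the PK lives elsewhere
        name text
    );

    INSERT INTO devices VALUES ('08:00:2b:01:02:03', 'first card');
    INSERT INTO devices VALUES ('08:00:2b:01:02:03', 'second card');
    -- ERROR:  duplicate key value violates unique constraint "devices_pkey"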
With a uniqueness constraint in this situation, the unexpected data--a row with a non-unique MAC--will be rejected, and possibly lost, when the insertion happens. You say that's a good thing; plenty of people will say it's the worst possible thing that can happen. When dealing with external data, it's often impossible to know at design time everything you're going to see later. Not everyone finds it reasonable to approach that problem in a way that loses any data that doesn't fit the original model.
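Here's a sketch of how that loss can bite during a bulk load, reusing the hypothetical devices table above; one surprise duplicate takes the whole batch down with it:

    BEGIN;
    INSERT INTO devices VALUES ('08:00:2b:aa:bb:01', 'row 1');
    INSERT INTO devices VALUES ('08:00:2b:aa:bb:01', 'row 2');  -- unexpected duplicate
    -- the ERROR aborts the transaction; the COMMIT below actually rolls back,
    -- so 'row 1' is lost along with the duplicate
    COMMIT;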
I don't think it's possible to decide in a generic way which of these is the better approach: reject unexpected data and force the problem back onto the application immediately (commit failure), or accept it because you're using a surrogate key and discover the problems down the line. Both are valid approaches with very different types of risk associated with them. I think it's fair to say, though, that real-world data is not always well known enough at design time to follow the idea you're suggesting, and that does factor into why there is such a preference for surrogate keys in the industry.
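To make the trade-off concrete, here's the surrogate-key variant of the same hypothetical table; the load succeeds, and the ambiguity waits until someone goes looking for it:

    CREATE TABLE devices_s (
        id   serial PRIMARY KEY,
        mac  macaddr,              -- deliberately no uniqueness constraint
        name text
    );

    -- both rows are accepted at load time, nothing is lost
    INSERT INTO devices_s (mac, name) VALUES ('08:00:2b:01:02:03', 'first card');
    INSERT INTO devices_s (mac, name) VALUES ('08:00:2b:01:02:03', 'second card');

    -- the ambiguity only surfaces when someone audits for it later
    SELECT mac, count(*) FROM devices_s GROUP BY mac HAVING count(*) > 1;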
--
Greg Smith   2ndQuadrant US   greg@xxxxxxxxxxxxxxx   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support   www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books