On Thu, Sep 14, 2017 at 07:11:19PM +0200, Rafal Pietrak wrote:
> As I said, I'm not looking for performance or "fair probability" of
> planetary-wide uniqueness.
>
> My main objective is the "guarantee". Which I've tried to indicate by
> referring to "future UPDATEs".
>
> What I mean here is functionality similar to a "primary key" or
> "unique constraint". Whenever somebody (an application, like a faulty
> application, IMPORTANT!) tries to INSERT or UPDATE a non-unique value
> there (which in fact could have been generated earlier by a UUID
> algorithm, or even a sequence), if that value is already in any table
> that uses that (mysterious) "global primary key", the application just
> fails the transaction like any other "not unique" constraint failure.
>
> That's the goal.
>
> That's why I have the impression that I'm going in entirely the wrong
> direction here.

Hi Rafal,

How many tables do you need to support a unique key across? My approach
to problems like this is to provide constraints that will allow normal
DB functions to provide these assurances. For example, give each table
its own serial generator as a primary key, but make the sequences
disjoint. There are 9,592 prime numbers less than 100,000. Give each
table one of those as its increment (and starting value); two such
sequences cannot produce the same value until they reach the product of
their two primes. This will at least allow you to prevent any table
from ever using a value meant for another table. Obviously, this may
not fit your use case, but it provides another way to attack the
problem.

Good luck.

Regards,
Ken
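
P.S. In case it helps, here is a minimal sketch of the idea in
PostgreSQL. The table names and the primes 99989 and 99991 are just
illustrative, and the CHECK constraints are my addition to enforce the
pattern against a faulty application; sequences alone don't give you
that. Also note the two sequences can both eventually reach
99989 * 99991 (around 10^10), so pick primes large enough for your
expected row counts.

    -- One sequence per table, each stepping by its own prime.
    CREATE SEQUENCE orders_id_seq   START WITH 99989 INCREMENT BY 99989;
    CREATE SEQUENCE invoices_id_seq START WITH 99991 INCREMENT BY 99991;

    CREATE TABLE orders (
        id bigint PRIMARY KEY
            DEFAULT nextval('orders_id_seq')
            -- Reject any id that does not match this table's pattern,
            -- even one supplied explicitly by a faulty application.
            CHECK (id % 99989 = 0),
        payload text
    );

    CREATE TABLE invoices (
        id bigint PRIMARY KEY
            DEFAULT nextval('invoices_id_seq')
            CHECK (id % 99991 = 0),
        payload text
    );

    -- An id generated for invoices (e.g. 99991) fails the CHECK on
    -- orders, so the transaction is rejected:
    --   INSERT INTO orders (id) VALUES (99991);
    --   ERROR:  new row for relation "orders" violates check constraint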