On 08/18/2009 01:14 PM, Doug Gorley wrote:
I just stumbled across this table in a database developed by a colleague:

 field_name | next_value | lock
------------+------------+------
 id_alert   |     500010 | FREE
 id_page    |     500087 | FREE
 id_group   |     500021 | FREE

These "id_" fields correspond to the primary keys on their respective tables. Instead of being of type serial, they are bigints with a NOT NULL constraint, and the sequence numbers are managed by the application, not the database.

I googled around a bit trying to find an argument either in favour of or against this approach, but didn't find much. I can't see the advantage of this approach over native PostgreSQL sequences, and it seems there are plenty of disadvantages: extra database queries to find the next sequence number, for one, and a locking mechanism that doesn't play well with multi-user updates, for two.

Can anyone comment on this? Has anyone ever had to apply a pattern like this when native sequences weren't sufficient? If so, what was the justification?
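For comparison, the pattern implied by that table usually boils down to something like the following sketch (the table name `sequence_table` and the exact transaction shape are assumptions; only the column and field names come from the quoted output):

```sql
-- Application-managed pattern: two statements plus an explicit
-- row lock for every generated id (hypothetical table name).
BEGIN;
SELECT next_value
  FROM sequence_table
 WHERE field_name = 'id_alert'
   FOR UPDATE;                       -- serializes all concurrent inserters
UPDATE sequence_table
   SET next_value = next_value + 1
 WHERE field_name = 'id_alert';
COMMIT;

-- Native equivalent: one call, no user-table round trip, and
-- nextval() never blocks other sessions.
CREATE SEQUENCE id_alert_seq;
SELECT nextval('id_alert_seq');
```

Note that the `lock` column with its FREE value suggests the application may even be doing its own advisory locking in that column rather than relying on row locks, which would be more fragile still (a crashed client can leave the row marked locked).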
One justification I can see is when there would otherwise be an unmanageably large number of individual sequences.

I have an app in which there is a table containing "things" that have a type code. There can be an arbitrary number of type codes, and in practice there may be many dozens. Each "thing" also has a user-visible id number, which users normally assign sequentially within each type. The app currently creates a sequence for each type and uses them to provide default values for the id numbers. I am considering changing this to something like you describe.

In my case there is a low insert rate, so contention (which I read is the biggest problem with this approach) should not be an issue.

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
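A sketch of how that per-type counter might replace dozens of sequences (table and column names are hypothetical; `UPDATE ... RETURNING` requires PostgreSQL 8.2 or later):

```sql
-- One row per type code instead of one sequence per type code.
CREATE TABLE type_counters (
    type_code   text   PRIMARY KEY,
    next_value  bigint NOT NULL DEFAULT 1
);

-- Atomically claim the next id for one type.  The implicit row lock
-- taken by UPDATE serializes concurrent inserts for the SAME type
-- only; different types don't contend with each other.
UPDATE type_counters
   SET next_value = next_value + 1
 WHERE type_code = 'widget'
RETURNING next_value - 1 AS claimed_id;
```

Unlike a native sequence, the claimed id rolls back with the transaction, so this also gives gapless per-type numbering at the cost of the contention mentioned above.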