django_user wrote:
How can I stop PostgreSQL from incrementing the primary key value on failed insert statements, so that even after many failed inserts the next successful one still gets the next id value?
[...]
So wouldn't I run out of ids one day if there are a lot of failed insert statements? Say for every successful insert there are 50 unsuccessful ones; the ids would then be roughly 1, 50, 100, and so on, and once I have thousands of rows I will run out of ids. Should I use bigserial instead?
Gordon wrote:
In theory, yes, but the standard 4-byte integer can represent about 2 billion positive numbers, so even with a lot of failed inserts you're probably not going to run out for years.
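If you want to keep an eye on it anyway, here is a minimal sketch; the table name "orders" and column "id" are just placeholders for illustration, and the sequence name assumes the default <table>_<column>_seq naming:

    -- Check how far the serial's backing sequence has advanced.
    SELECT last_value FROM orders_id_seq;

    -- If you ever do approach 2^31 - 1, widen the column in place;
    -- existing values and the sequence keep working.
    ALTER TABLE orders ALTER COLUMN id TYPE bigint;

Note that the ALTER TABLE rewrites the table, so on a very large table it can take a while, but it spares you from starting out with bigserial "just in case".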
Lew wrote:
First of all, sequence-generated keys are artificial (surrogate) keys. They should carry no meaning whatsoever to the data model or business logic. If they do carry meaning, then your code should manage the values rather than relying on an automatic sequence.
Also, if after careful review of your analysis you still have requirements that the auto-generated sequence mechanism does not meet, such as leaving no gaps in the numbering, you should implement your own sequence instead of wishing the automatic mechanism were different from what it is.
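For example, if you truly need gap-free numbers (invoice numbers are the classic case), the usual approach is a single-row counter table updated in the same transaction as the insert, so a rolled-back insert leaves the counter untouched. This is only a rough sketch with made-up table and column names:

    -- One-row counter table; names here are hypothetical.
    CREATE TABLE invoice_counter (last_id integer NOT NULL);
    INSERT INTO invoice_counter VALUES (0);

    -- Each insert bumps the counter and uses the new value in the same
    -- transaction. The UPDATE takes a row lock, so concurrent inserts
    -- queue up behind it; a ROLLBACK leaves the counter unchanged.
    BEGIN;
    UPDATE invoice_counter SET last_id = last_id + 1;
    INSERT INTO invoices (id, customer)
        SELECT last_id, 'acme' FROM invoice_counter;
    COMMIT;

The cost is that every insert serializes on that one row, which is exactly why the built-in sequences don't work this way.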
Chances are good that the auto-genned sequence will work for you, if you think about it. If not, would you share why you anticipate that gaps will cause trouble for you?
-- Lew