Re: Equivalent for AUTOINCREMENT?

Michelle Konzack wrote:
Hello Michael,

On 2008-10-31 11:15:54, Michael Hall wrote:
I have a table with a serial number which should be increased by
each INSERT. I know I can use max() to get the highest number, but how
can I use it in an INSERT statement?
Have a look in the manual for the SERIAL data type.
For fields with a SERIAL data type, you can use DEFAULT in an INSERT statement.
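For instance, a minimal sketch of that advice (the table name is made up here, keeping "serno" as the column name):

CREATE TABLE mytable ( serno SERIAL PRIMARY KEY, payload TEXT NOT NULL );

-- Either name the column and pass DEFAULT...
INSERT INTO mytable ( serno, payload ) VALUES ( DEFAULT, 'first row' );
-- ...or simply leave the column out and let the default apply:
INSERT INTO mytable ( payload ) VALUES ( 'second row' );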

OK, the SERIAL is now working, but there is ONE problem.

If the INSERT fails, the SERIAL is increased anyway...

Yes, that's by design. If that didn't happen, then it would be impossible for another transaction to obtain a value from the sequence before the previous transaction had committed or rolled back, even if it had to do lots of other unrelated work before it committed.
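For instance, a small illustration with a throwaway table:

CREATE TABLE demo ( id SERIAL PRIMARY KEY, val TEXT );

BEGIN;
INSERT INTO demo ( val ) VALUES ( 'never committed' );  -- consumes id 1
ROLLBACK;

INSERT INTO demo ( val ) VALUES ( 'committed' );        -- consumes id 2

SELECT id, val FROM demo;
-- Returns a single row with id = 2. The value 1 was consumed by the
-- rolled-back INSERT and will never be handed out again.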

If you really, truly need gapless sequences, there are some options. I posted about them recently on another thread. The archives will contain that post and many others from many people on the same topic. Be aware, though, that gapless sequences have some NASTY performance consequences.

So this was the reason why I tried to use max(serno) and add +1
to get the next value for "serno".

Is there a solution for it?

Design your application not to expect your primary keys to be gapless. If it requires contiguous sequences for something, generate them at query time instead of storing them as primary keys. If the contiguous sequence numbers must also be stable over the life of the record, try to redesign to avoid that requirement if at all possible.
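For example, on versions with window function support (8.4 and later) you can number the rows at query time; "orders" and "customer" here are just stand-ins:

SELECT row_number() OVER (ORDER BY id) AS seq_no,
       id,
       customer
FROM orders;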

Since I can use a transaction block if required, there should be no
problem with concurrent access.

If you do not mind your transactions being unable to concurrently access the data, there are many options available for gapless sequences. If you have to handle DELETEs then you'll need a trigger to renumber everything, but assuming you don't, something like this should work:

CREATE TABLE id_counter ( last_used INTEGER NOT NULL );
INSERT INTO id_counter ( last_used ) VALUES ( -1 );


-- Now, within a transaction:
BEGIN;
--
-- Update the id counter. This obtains a lock that'll cause other
-- transactions to block until this transaction commits or rolls back
-- if they also attempt to update the counter. Only one transaction
-- (that touches id_counter) at a time may be doing work from this
-- point on.
--
-- You could use an explicit LOCK TABLE, but it's kind of pointless.
--
UPDATE id_counter SET last_used = last_used + 1;

--
-- Use our newly obtained ID, which is guaranteed to be free and
-- unused by any concurrent transaction.
--
INSERT INTO sometable ( id, blah ) VALUES ( (SELECT last_used FROM id_counter), 'blah');

-- You can now do your other work and commit. Do as little as
-- possible after this, though, because you're preventing any
-- other transactions that need to access id_counter from doing
-- anything.
COMMIT;
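As an aside, on much newer versions (9.1 and later, which have writable CTEs) the UPDATE and the INSERT can be combined into a single statement. A sketch of the same idea:

WITH bump AS (
    UPDATE id_counter SET last_used = last_used + 1
    RETURNING last_used
)
INSERT INTO sometable ( id, blah )
SELECT last_used, 'blah' FROM bump;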

--
Craig Ringer


