
Re: Could postgres12 support millions of sequences? (like 10 million)


 



On 3/21/20 11:08 AM, pabloa98 wrote:

> > As to below that is going to require more thought.
> >
> Still no word on the actual requirement. As someone who believes
> consecutive numbers on digital invoices is simply a mistaken
> interpretation of the paper based system, I suspect a similar error
> here. But again we haven’t really heard, far as I know. Something
> really fishy about 99999999.

It is fishy. This is the thing: the code, element, and group are parts of a bigger code called the item identifier (or ii).

An item identifier is the result of: code || group || element || qualifier (a constant like 55) || check digit (computed by some check algorithm).

For example:

For a triplet (group, element, code) like (1, 3, 63), the item identifier (or ii) is 630010003558 (the trailing 8 is the check digit). This number is converted to a bigint and stored (and used as a PK or FK in other tables, etc.).

In an item identifier, the room (digit positions) is assigned like this:

Revised, to make it match the above identifier:

 8 digits for code (not padded with 0s)
 3 digits for group
 4 digits for element
 2 digits for qualifier
 1 digit for the check digit
-----------------------------
18 digits for the item identifier
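
Just to make the layout concrete, here is a minimal sketch of it as a SQL function. The names are made up, and the check digit is passed in as a parameter because the actual check algorithm is not shown here:

CREATE OR REPLACE FUNCTION make_ii(p_code bigint, p_group int, p_element int,
                                   p_check_digit int)
RETURNS bigint AS $$
    SELECT (p_code::text                      -- code: up to 8 digits, not zero-padded
            || lpad(p_group::text, 3, '0')    -- 3 digits for group
            || lpad(p_element::text, 4, '0')  -- 4 digits for element
            || '55'                           -- constant qualifier
            || p_check_digit::text)::bigint;  -- 1 check digit
$$ LANGUAGE sql IMMUTABLE;

-- Matches the example above:
-- SELECT make_ii(63, 1, 3, 8);  --> 630010003558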



So the question may actually be:

How do we improve our locking code, so we don't have to spawn millions of sequences?

What is the locking method you are using?


The lock part is because we solved a similar problem with a counter, by row-locking the counter and increasing it in another part of the database. The result is that all the queries using that table are queued per (group, element) pair, which is not that bad because we are not inserting thousands of rows per second. Still, it is hurting cluster performance (though performance is still OK from the business point of view).

The problem with using locks is that they are too sensitive to developer errors and bugs. Sometimes connected clients abort and the connection is returned to the pool with the lock still active, until the connection is closed or someone unlocks the row. I would prefer to have something more resilient to developer/programming errors, if possible.
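
To make the current approach concrete, it is roughly the pattern below (table and column names are illustrative, not the real schema): a per-(group, element) counter row that gets locked and incremented to hand out the next code.

-- Illustrative shape of the per-(group, element) counter, not the real schema.
CREATE TABLE code_counter (
    group_id   int    NOT NULL,
    element_id int    NOT NULL,
    last_code  bigint NOT NULL DEFAULT 0,
    PRIMARY KEY (group_id, element_id)
);

-- Handing out the next code locks the counter row, so concurrent callers for
-- the same (group, element) queue behind each other. The row lock is released
-- only when the surrounding transaction commits or rolls back, which is where
-- an aborted client that leaves its transaction open causes trouble.
BEGIN;
SELECT last_code
  FROM code_counter
 WHERE group_id = 1 AND element_id = 3
   FOR UPDATE;

UPDATE code_counter
   SET last_code = last_code + 1
 WHERE group_id = 1 AND element_id = 3
RETURNING last_code;
COMMIT;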






--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx




