Re: Could postgres12 support millions of sequences? (like 10 million)

On Fri, Mar 20, 2020 at 10:26 AM Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote:
On 3/20/20 9:59 AM, Adrian Klaver wrote:
> On 3/19/20 10:31 PM, pabloa98 wrote:
>> I see.
>>
>> Any suggestion? It should behave like a sequence in the sense that
>> concurrent transactions will get different numbers from this
>> alternative sequence-like solution.
>>
>> In our case, we will need to call nextval('some_seq') (or
>> similar) from different processes no more than twice every minute.
>>
>>
>> It would be nice to have a sequence data type. Correct me if I am
>> wrong, but it seems to me that a sequence data type would cost the
>> same or less than the current sequence implementation.
>>
>> The schema would be clearer too. We could have a table like:
>> CREATE TABLE pair(
>>     group   INT NOT NULL,
>>     element INT NOT NULL,
>>     seq     SEQUENCE INCREMENT 1
>>                      START 1
>>                      CACHE 1
>>                      MINVALUE 1
>>                      MAXVALUE 99999999
>>             NOT NULL,
>>     PRIMARY KEY (group, element)
>> );
>>
>> And then:
>>
>> INSERT INTO event(group, element, code)
>> VALUES (
>>           1,
>>           1,
>>           ( SELECT seq.nextval('code_1_1_seq') FROM pair p WHERE
>> p.group=1 and p.element=1 )
>> );
>>
>> Or perhaps storing all the sequences as rows in the same table would
>> have the same behavior.
>
> If code is just something to show the order of insertion for (group,
> element) combinations, then maybe something like below:
>
> CREATE TABLE event(
> group_id INT NOT NULL, --changed as group is reserved word
> element INT NOT NULL,
> insert_ts timestamptz NOT NULL DEFAULT clock_timestamp(),
> PRIMARY KEY(group_id, element, insert_ts)
> );
>
>
> insert into event(group_id, element) VALUES
> (1, 1),
> (1, 1),
> (1, 1),
> (2, 1),
> (1, 1),
> (1, 3),
> (1, 1),
> (1, 3),
> (2, 1),
> (2, 1);
>
>
> select * from event ;
>   group_id | element |           insert_ts
> ----------+---------+--------------------------------
>          1 |       1 | 03/20/2020 09:51:12.675926 PDT
>          1 |       1 | 03/20/2020 09:51:12.675985 PDT
>          1 |       1 | 03/20/2020 09:51:12.675991 PDT
>          2 |       1 | 03/20/2020 09:51:12.675996 PDT
>          1 |       1 | 03/20/2020 09:51:12.676 PDT
>          1 |       3 | 03/20/2020 09:51:12.676004 PDT
>          1 |       1 | 03/20/2020 09:51:12.676008 PDT
>          1 |       3 | 03/20/2020 09:51:12.676012 PDT
>          2 |       1 | 03/20/2020 09:51:12.676016 PDT
>          2 |       1 | 03/20/2020 09:51:12.67602 PDT
> (10 rows)
>
>
> select group_id, element, row_number() OVER (partition by (group_id,
> element) order by (group_id, element)) AS code from event;
>   group_id | element | code
> ----------+---------+------
>          1 |       1 |    1
>          1 |       1 |    2
>          1 |       1 |    3
>          1 |       1 |    4
>          1 |       1 |    5
>          1 |       3 |    1
>          1 |       3 |    2
>          2 |       1 |    1
>          2 |       1 |    2
>          2 |       1 |    3
> (10 rows)

Oops, the above actually returned:

select group_id, element, row_number() OVER (partition by (group_id,
element) order by (group_id, element)) AS code, insert_ts from event;
  group_id | element | code |           insert_ts
----------+---------+------+--------------------------------
         1 |       1 |    1 | 03/20/2020 09:51:12.675926 PDT
         1 |       1 |    2 | 03/20/2020 09:51:12.675985 PDT
         1 |       1 |    3 | 03/20/2020 09:51:12.675991 PDT
         1 |       1 |    4 | 03/20/2020 09:51:12.676008 PDT
         1 |       1 |    5 | 03/20/2020 09:51:12.676 PDT
         1 |       3 |    1 | 03/20/2020 09:51:12.676004 PDT
         1 |       3 |    2 | 03/20/2020 09:51:12.676012 PDT
         2 |       1 |    1 | 03/20/2020 09:51:12.67602 PDT
         2 |       1 |    2 | 03/20/2020 09:51:12.676016 PDT
         2 |       1 |    3 | 03/20/2020 09:51:12.675996 PDT
(10 rows)


Needs to be:

select group_id, element, row_number() OVER (partition by (group_id,
element) order by (group_id, element, insert_ts)) AS code, insert_ts
from event;
  group_id | element | code |           insert_ts
----------+---------+------+--------------------------------
         1 |       1 |    1 | 03/20/2020 09:51:12.675926 PDT
         1 |       1 |    2 | 03/20/2020 09:51:12.675985 PDT
         1 |       1 |    3 | 03/20/2020 09:51:12.675991 PDT
         1 |       1 |    4 | 03/20/2020 09:51:12.676 PDT
         1 |       1 |    5 | 03/20/2020 09:51:12.676008 PDT
         1 |       3 |    1 | 03/20/2020 09:51:12.676004 PDT
         1 |       3 |    2 | 03/20/2020 09:51:12.676012 PDT
         2 |       1 |    1 | 03/20/2020 09:51:12.675996 PDT
         2 |       1 |    2 | 03/20/2020 09:51:12.676016 PDT
         2 |       1 |    3 | 03/20/2020 09:51:12.67602 PDT
(10 rows)
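
A side note on that ordering: clock_timestamp() only has microsecond resolution, so two rows for the same (group_id, element) could in principle get the same insert_ts, which would both violate the primary key above and leave the code assignment ambiguous. A minimal variant, assuming an extra identity column (event_id is my name for it, not something from this thread) used purely as a tiebreaker:

CREATE TABLE event(
    group_id  INT NOT NULL,
    element   INT NOT NULL,
    event_id  BIGINT GENERATED ALWAYS AS IDENTITY,  -- assumed tiebreaker column
    insert_ts timestamptz NOT NULL DEFAULT clock_timestamp(),
    PRIMARY KEY (group_id, element, event_id)
);

select group_id, element,
       row_number() OVER (PARTITION BY group_id, element
                          ORDER BY insert_ts, event_id) AS code,
       insert_ts
from event;

The window query is otherwise the same; the identity column only settles ties.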



Clever. :D
I will use it for other things.

The problem for this specific case is that if someone deletes a row, several codes will change. For this use case, codes must not change once assigned.
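
For what it's worth, one way to get codes that are assigned once at insert time and never renumbered afterwards (so deletes cannot shift them) is a small counter table keyed by (group_id, element), bumped with INSERT ... ON CONFLICT ... DO UPDATE ... RETURNING; the row lock taken by the update serializes concurrent callers. A rough sketch only, not something proposed in this thread, with made-up names (pair_counter, next_code) and assuming event carries a code column as in the earlier example:

CREATE TABLE pair_counter(
    group_id  INT NOT NULL,
    element   INT NOT NULL,
    last_code INT NOT NULL DEFAULT 0,
    PRIMARY KEY (group_id, element)
);

-- Bumps and returns the counter for one (group_id, element) pair.
-- Concurrent callers block on the row lock, so each gets a distinct code.
CREATE FUNCTION next_code(p_group INT, p_element INT) RETURNS INT AS $$
    INSERT INTO pair_counter AS pc (group_id, element, last_code)
    VALUES (p_group, p_element, 1)
    ON CONFLICT (group_id, element)
    DO UPDATE SET last_code = pc.last_code + 1
    RETURNING last_code;
$$ LANGUAGE sql;

-- Usage:
INSERT INTO event(group_id, element, code)
VALUES (1, 1, next_code(1, 1));

Unlike a real sequence this is transactional: a rolled-back insert does not consume a code (so no gaps), but two concurrent inserts for the same pair block each other until the first commits, which should be fine at the stated rate of a couple of calls per minute per pair.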


