
Re: Define hash partition for certain column values

Sorry, I think I've not described my case precisely enough.
 
"Randomly" is not pure random in my case.
 
My solution is planned to be used on different servers with different DBs. The initial data of the base table depends on the DB, but I know that the key values of new rows are increasing. Not monotonically, but still.
I need a common solution for all DBs.
 
The size of the base table can vary widely (from millions to hundreds of billions of rows). For tests I've used 2 different dumps.
 
Ranges that were suitable for the first dump produce, for the second, exactly the situation I've described (2-3 partitions holding 95% of the data), and vice versa.
Besides, the constantly increasing key values of new rows mean that some range partitions would keep growing,
while others would stay the same size or even shrink (as outdated data is cleared).
 
Hash partitioning shows that I will have partitions that are not exactly the same size, but similar enough. And this result holds for both dumps.
 
So I've decided to use hash partitioning.
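For context, a minimal sketch of such a scheme (table and column names are hypothetical; numeric has a hash operator class, so it can serve as a hash partition key):

    CREATE TABLE base_table (
        key  numeric(20) NOT NULL,
        data text
    ) PARTITION BY HASH (key);

    -- one partition per remainder; 8 partitions shown as an example
    CREATE TABLE base_table_p0 PARTITION OF base_table
        FOR VALUES WITH (MODULUS 8, REMAINDER 0);
    CREATE TABLE base_table_p1 PARTITION OF base_table
        FOR VALUES WITH (MODULUS 8, REMAINDER 1);
    -- ... and so on up to base_table_p7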
 
Thank you,
Iana Golubeva


12.01.2021, 19:41, "Michael Lewis" <mlewis@xxxxxxxxxxx>:
On Tue, Jan 12, 2021 at 9:37 AM Alban Hertroys <haramrae@xxxxxxxxx> wrote:

> On 12 Jan 2021, at 16:51, Голубева Яна <ishsha@xxxxxxxxx> wrote:
>
> Values for the key partitioning column are generated randomly and I can't predict their distribution between ranges.
> If I just create some ranges I won't have any guarantee that partitions will have similar amounts of data. It is possible that I will have 2 or 3 extremely big partitions and a bit of data in the others.

A hash of a random number is also random, so when using hashes for partitioning you will get the same problem.

If you want to distribute values equally over a fixed number of partitions, I suggest you partition on a modulo of a monotonically increasing number (a sequence, for example), instead of relying on a random number.
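A sketch of that idea (names are hypothetical; PostgreSQL accepts expressions such as a modulo in the partition key):

    CREATE SEQUENCE seq_table_id_seq;

    CREATE TABLE seq_table (
        id   bigint NOT NULL DEFAULT nextval('seq_table_id_seq'),
        data text
    ) PARTITION BY LIST ((id % 4));

    -- with a sequence-generated id, consecutive rows cycle through the
    -- remainders 0..3, so the four partitions fill at the same rate
    CREATE TABLE seq_table_m0 PARTITION OF seq_table FOR VALUES IN (0);
    CREATE TABLE seq_table_m1 PARTITION OF seq_table FOR VALUES IN (1);
    CREATE TABLE seq_table_m2 PARTITION OF seq_table FOR VALUES IN (2);
    CREATE TABLE seq_table_m3 PARTITION OF seq_table FOR VALUES IN (3);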

> 12.01.2021, 17:55, "Michael Lewis" <mlewis@xxxxxxxxxxx>:
> On Tue, Jan 12, 2021 at 1:21 AM Голубева Яна <ishsha@xxxxxxxxx> wrote:
> List or range partitioning isn't suitable for my case.
> I am using a column of numeric(20) type as a base for partitioning. The values of the column are generated randomly.
> So there will be too many partitions if I use list partitioning as is.
>
> Sorry, but why is range not suited for this? It would seem fairly trivial to create 50 or 1000 partitions to break up the range of values allowed by your field definition.

Alban Hertroys
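For reference, the range approach mentioned above might look something like this (a sketch; the bounds are illustrative, slicing a numeric(20) key space into equal-width partitions):

    CREATE TABLE range_table (
        key  numeric(20) NOT NULL,
        data text
    ) PARTITION BY RANGE (key);

    -- e.g. equal-width slices of 2 * 10^18, covering 0 .. 10^20 in 50 steps
    CREATE TABLE range_table_r00 PARTITION OF range_table
        FOR VALUES FROM (MINVALUE) TO (2000000000000000000);
    CREATE TABLE range_table_r01 PARTITION OF range_table
        FOR VALUES FROM (2000000000000000000) TO (4000000000000000000);
    -- ... further slices, ending with a partition bounded by MAXVALUE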

That said, there is no reason you should need near-perfectly-even distribution anyway. You can also split partitions later, or do another level of partitioning on large partitions if they somehow end up significantly unbalanced.
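For instance, a single oversized hash partition can itself be partitioned (a sketch with hypothetical names; note the sub-modulus is chosen coprime to the parent's modulus so rows actually spread):

    CREATE TABLE big_table (
        key  numeric(20) NOT NULL,
        data text
    ) PARTITION BY HASH (key);

    -- one top-level partition declared as itself hash-partitioned
    CREATE TABLE big_table_p3 PARTITION OF big_table
        FOR VALUES WITH (MODULUS 8, REMAINDER 3)
        PARTITION BY HASH (key);

    -- the sub-modulus (3) is coprime to the parent modulus (8), so rows
    -- hashing to remainder 3 above still spread across all three
    -- sub-partitions; a sub-modulus of 2 or 4 would leave some empty
    CREATE TABLE big_table_p3_0 PARTITION OF big_table_p3
        FOR VALUES WITH (MODULUS 3, REMAINDER 0);
    CREATE TABLE big_table_p3_1 PARTITION OF big_table_p3
        FOR VALUES WITH (MODULUS 3, REMAINDER 1);
    CREATE TABLE big_table_p3_2 PARTITION OF big_table_p3
        FOR VALUES WITH (MODULUS 3, REMAINDER 2);
    -- (the other seven top-level partitions are omitted for brevity)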

How many rows are we talking about initially/over time? Do you plan to drop old data at all? Perhaps the initial decision to partition was made a bit too hastily.
