Re: Performance of a large array access by position (tested version 9.1.3)

On 22/06/12 09:02, Maxim Boguk wrote:
Hi all,

Maybe I'm completely wrong, but I have always assumed that access to an array element in PostgreSQL should take close to constant time.
In tests, however, I found that the access speed degrades as O(N) with the array size.

Test case (performed on a large, otherwise idle server with 1GB work_mem, to ensure I am working in memory only):

WITH
t AS (SELECT ARRAY(SELECT * FROM generate_series(1,N)) AS _array)
SELECT count((SELECT _array[i] FROM t)) FROM generate_series(1,10000) as g(i);

Results for N between 1 and 10,000,000 (using a locally connected psql with \timing):

N:          Time:
1           5.8ms
10          5.8ms
100         5.8ms
1000        6.7ms
-- up to here everything is reasonable
5k         21ms
10k        34ms
50k       177ms
100k      321ms
500k     4100ms
1M       8100ms
2M      22000ms
5M      61000ms
10M    220000ms = 22ms per single array element access.


Is that behaviour correct?

PS: What I am actually looking for is a tuplestore with fast, constant-time access by position, for use with recursive queries and/or PL/pgSQL, but without resorting to C programming.

For large values, the default column storage is to compress them and store them in TOAST.
That is what is causing the slowdown. Try changing the storage of the column to EXTERNAL
instead and rerun the test.

ALTER TABLE <tablename> ALTER COLUMN <column name> SET STORAGE EXTERNAL;

The default is EXTENDED, which compresses the value, and that again makes it hard to
jump to a given position without reading and decompressing the whole thing.
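
For example, a minimal sketch of how you might try this (the table name big_array_test and
the array size are just for illustration; note that SET STORAGE only affects rows inserted
afterwards, so set it before loading the data):

CREATE TABLE big_array_test (_array int[]);
ALTER TABLE big_array_test ALTER COLUMN _array SET STORAGE EXTERNAL;

-- materialize the array in the table with uncompressed (EXTERNAL) storage
INSERT INTO big_array_test
    SELECT ARRAY(SELECT * FROM generate_series(1, 1000000));

-- same per-element access pattern as your test, but against the stored column
SELECT count((SELECT _array[i] FROM big_array_test))
FROM generate_series(1, 10000) AS g(i);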

http://www.postgresql.org/docs/9.1/static/sql-altertable.html

Let us know what you get.

Jesper

