Hello,
I am benchmarking different ways of putting data into a table at table creation time. The third option pipes the data through GNU parallel into four concurrent COPY processes:
parallel --block 128M --jobs 4 --pipe psql -c "COPY c FROM STDIN";
(the parallel command is available as part of the parallel deb package on Ubuntu, for example; with --pipe it splits stdin into blocks at newline boundaries and feeds each block to a separate instance of the given command)
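To be concrete, the whole pipeline looks roughly like the sketch below. The producer half and the join condition are placeholders of my own; only the parallel half is the exact command I run:

# Assumed overall shape: a producer streams COPY text format to stdout,
# and GNU parallel fans it out to 4 concurrent COPY backends.
# The SELECT is hypothetical; only the second half is the exact command.
psql -c "COPY (SELECT * FROM a JOIN b USING (id)) TO STDOUT" \
  | parallel --block 128M --jobs 4 --pipe psql -c "COPY c FROM STDIN"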
Both tables a and b have ~16M records, and one of the columns in a is a geometry (ranging from several KB to several MB in size). The columns in b are mostly integers.
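Schematically, something like this (column names are invented; only the column types and row counts match the description):

-- Rough shape of the two source tables; names are placeholders.
CREATE TABLE a (
    id   integer PRIMARY KEY,
    geom geometry            -- PostGIS geometry, several KB to several MB each
);
CREATE TABLE b (
    id     integer PRIMARY KEY,
    field1 integer,
    field2 integer
);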
The machine that I am running these commands on has the following PostgreSQL settings (from the pgtune wizard):
default_statistics_target = 50 # pgtune wizard 2012-06-06
maintenance_work_mem = 1GB # pgtune wizard 2012-06-06
constraint_exclusion = on # pgtune wizard 2012-06-06
checkpoint_completion_target = 0.9 # pgtune wizard 2012-06-06
effective_cache_size = 48GB # pgtune wizard 2012-06-06
work_mem = 80MB # pgtune wizard 2012-06-06
wal_buffers = 8MB # pgtune wizard 2012-06-06
checkpoint_segments = 16 # pgtune wizard 2012-06-06
shared_buffers = 16GB # pgtune wizard 2012-06-06
max_connections = 400 # pgtune wizard 2012-06-06
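(Those are the values pgtune wrote into postgresql.conf; the settings actually in effect can be confirmed from pg_settings:)

-- Confirm the settings the server is actually running with.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'checkpoint_segments', 'max_connections');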
One would expect the 3rd option to be faster than options 1 and 2; however, option 2 outperforms both by a large margin (sometimes 2x). This is especially surprising given that COPY doesn't acquire a global lock on the table, only a RowExclusiveLock, and RowExclusiveLock does not conflict with itself, so the four concurrent COPY processes should be able to insert into the table simultaneously.
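That lock level is easy to confirm from a second session while one of the COPY processes is running (a quick check, assuming the target table is c as in the command above):

-- Run while COPY c FROM STDIN is in flight in another session.
SELECT locktype, relation::regclass AS rel, mode, granted
FROM pg_locks
WHERE relation = 'c'::regclass;
-- Expected: one RowExclusiveLock row per concurrent COPY backend.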
So is option 2 the winner by design? Could you please suggest other alternatives to try (if there are any)? And what might be the reason that option 3 does not outperform the other two?
Thank you,
Igor