On Sun, Sep 23, 2018 at 1:15 PM Kim Rose Carlsen <krc@xxxxxxxx> wrote:
Hi
I have some simple INSERT / UPDATE queries that take a long time the first time they are run in our test environment, but I'm not sure what Postgres is doing and what I can do to help it. What's common is that the tables contain many rows, on the order of about 20 million.
Query:
INSERT INTO communication.request_parameter (request_id, template_version_parameter_id, parameter_value)
VALUES (1222, 1211, 122) RETURNING request_parameter_id
Row from pg_stat_statements (times are in milliseconds):
userid              | 16385
dbid                | 16389
queryid             | 2064198912
query               | INSERT INTO communication.request_parameter (request_id, tem
calls               | 98
total_time          | 646.393451
min_time            | 0.036666
max_time            | 638.712758
mean_time           | 6.59585154081633
stddev_time         | 64.1818799227704
rows                | 98
shared_blks_hit     | 2850
shared_blks_read    | 24
shared_blks_dirtied | 21
shared_blks_written | 0
local_blks_hit      | 0
local_blks_read     | 0
local_blks_dirtied  | 0
local_blks_written  | 0
temp_blks_read      | 0
temp_blks_written   | 0
blk_read_time       | 0
blk_write_time      | 0
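A row like the one above can be pulled with something along these lines (just a sketch; the column list and the LIKE pattern are illustrative, not part of the original report):

SELECT query, calls, min_time, max_time, mean_time,
       shared_blks_hit, shared_blks_read
FROM pg_stat_statements
WHERE query LIKE 'INSERT INTO communication.request_parameter%';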
Description of table:

# \d communication.request_parameter
Table "communication.request_parameter"
Column | Type | Collation | Nullable | Default
-------------------------------+-------------------+-----------+----------+-------------------------------------------------------------------------------
request_parameter_id | integer | | not null | nextval('communication.request_parameter_request_parameter_id_seq'::regclass)
request_id | integer | | not null |
template_version_parameter_id | integer | | not null |
parameter_value | character varying | | |
Indexes:
"request_parameter_pkey" PRIMARY KEY, btree (request_parameter_id)
"request_parameter_parameter_value_idx" btree (parameter_value)
"request_parameter_request_id_idx" btree (request_id)
"request_parameter_template_version_parameter_id_idx" btree (template_version_parameter_id)
Foreign-key constraints:
"request_parameter_request_id_fkey" FOREIGN KEY (request_id) REFERENCES communication.request(request_id)
"request_parameter_template_version_parameter_id_fkey" FOREIGN KEY (template_version_parameter_id) REFERENCES communication.template_version_parameter(template_version_parameter_id)
This only happens in testing, and only on a cold-booted database. The test database is constructed with pg_dump and restore on a fresh PostgreSQL installation.
Best Regards
Kim Carlsen

Sounds like warming up the cache, but still, in a test environment you may want to add auto_explain to your list of preloaded libraries and perhaps set it to dump an EXPLAIN ANALYZE plan whenever a query exceeds a certain duration threshold. Note that while dumping the query plans has very little overhead, timing the query plan nodes does impact performance in a negative way.
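A minimal sketch of that setup, assuming superuser access in the test session (the 250ms threshold is just an example value; auto_explain can also be preloaded via shared_preload_libraries instead of LOAD):

LOAD 'auto_explain';                          -- or add auto_explain to shared_preload_libraries
SET auto_explain.log_min_duration = '250ms';  -- only log plans for queries slower than this
SET auto_explain.log_analyze = on;            -- time each plan node (this is the part with overhead)
SET auto_explain.log_buffers = on;            -- include buffer hit/read counts in the logged plan

After that, any statement crossing the threshold gets its full plan written to the server log, which should show whether the slow first execution is spent on cold reads of the indexes and foreign-key target tables. If the goal is instead to avoid the cold-cache hit altogether, the pg_prewarm contrib extension (not mentioned above) can pull a table or index into shared buffers up front, e.g. SELECT pg_prewarm('communication.request_parameter_pkey');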
Best Wishes,
Chris Travers
Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor lock-in.