First off, any thoughts on tuning inserts into large tables? I have a large
table with an insert like this:
insert into public.bigtab1 (text_col1, text_col2, id) values ...
QUERY PLAN
------------------------------------------
Result (cost=0.00..0.01 rows=1 width=0)
(1 row)
The query cost is low, but this is one of the slowest statements per pgfouine.
Possible causes of slow inserts (a few catalog queries to check these are sketched after this list):
- slow triggers?
- slow foreign key checks? (missing index on the referenced table?)
- a functional index on a slow function?
- crummy hardware (5 MB/s RAID cards, etc.)?
- too many indexes?
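
A minimal sketch of such checks, assuming the table really is public.bigtab1 (adjust the names to your schema); these only list what each insert has to pay for, they don't measure it:

-- Triggers that fire on the table (includes RI constraint triggers):
SELECT tgname, tgenabled
FROM pg_trigger
WHERE tgrelid = 'public.bigtab1'::regclass;

-- Foreign keys each insert must validate (one lookup in the referenced table per FK):
SELECT conname, confrelid::regclass AS referenced_table
FROM pg_constraint
WHERE conrelid = 'public.bigtab1'::regclass
  AND contype = 'f';

-- Every index that has to be updated on each insert:
SELECT indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'public'
  AND tablename = 'bigtab1';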
Next we have a select count(*) that is also one of the top offenders:
select count(*) from public.tab3 where user_id=31
and state='A'
and amount>0;
QUERY PLAN
-----------------------------------------------------------------------------------------------------
Aggregate  (cost=3836.53..3836.54 rows=1 width=0)
  ->  Index Scan using order_user_indx on tab3  (cost=0.00..3834.29 rows=897 width=0)
        Index Cond: (idx_user_id = 31406948::numeric)
        Filter: ((state = 'A'::bpchar) AND (amount > 0::numeric))
(4 rows)
We have an index on user_id but not on state or amount; should we add an
index on amount?
Can we see EXPLAIN ANALYZE?
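
Spelled out, the EXPLAIN ANALYZE being asked for is just the query above again, so actual row counts and timings appear next to the estimates:

EXPLAIN ANALYZE
SELECT count(*)
FROM public.tab3
WHERE user_id = 31
  AND state = 'A'
  AND amount > 0;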
In this case the ideal index would be a multicolumn index on (user_id, state),
(user_id, amount), or (user_id, state, amount), but choosing between the three
depends on your data...
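
For concreteness, the three candidates would look like this (the index names here are made up; you would create only one of them once you know the data):

CREATE INDEX tab3_user_state_idx        ON public.tab3 (user_id, state);
CREATE INDEX tab3_user_amount_idx       ON public.tab3 (user_id, amount);
CREATE INDEX tab3_user_state_amount_idx ON public.tab3 (user_id, state, amount);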
You could do:

SELECT count(*), state, amount > 0
FROM public.tab3
WHERE user_id = 31
GROUP BY state, amount > 0;

And post the results.
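
If those counts show that only a small fraction of a user's rows have state = 'A' and amount > 0, one further option (not raised above, just a sketch under that assumption) is a partial index matching the query's WHERE clause:

-- Hypothetical partial index; only helps queries that filter on exactly
-- state = 'A' AND amount > 0 in addition to user_id.
CREATE INDEX tab3_user_active_idx
    ON public.tab3 (user_id)
    WHERE state = 'A' AND amount > 0;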