Hello Ben,
Looks like you need to tune autovacuum to be more aggressive. Make sure autovacuum = on (the default), increase autovacuum_max_workers (at least 1 per database, more if autovacuum is falling behind), and lower autovacuum_vacuum_scale_factor to roughly half of its default; it can also be set per table, and a lower value helps for large tables.
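For example (just a sketch: the numbers and the table name are placeholders, and ALTER SYSTEM needs PostgreSQL 9.4+, otherwise edit postgresql.conf directly):
-- Cluster-wide settings; autovacuum_max_workers only takes effect after a server restart.
ALTER SYSTEM SET autovacuum_max_workers = 6;            -- default is 3
ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.1;  -- half of the 0.2 default
SELECT pg_reload_conf();
-- Per-table override for a large table (table name is hypothetical):
ALTER TABLE my_big_table SET (autovacuum_vacuum_scale_factor = 0.01);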
You can run the following SQL, taken from Bucardo's check_postgres, to see the last time each table was analyzed. If that still isn't happening frequently enough, you can also try decreasing autovacuum_naptime:
SELECT current_database() AS datname,
       nspname AS sname,
       relname AS tname,
       CASE WHEN v IS NULL THEN -1
            ELSE round(extract(epoch FROM now() - v))
       END AS ltime,
       CASE WHEN v IS NULL THEN '?'
            ELSE TO_CHAR(v, 'HH24:MI FMMonth DD, YYYY')
       END AS ptime
FROM (SELECT nspname,
             relname,
             GREATEST(pg_stat_get_last_analyze_time(c.oid),
                      pg_stat_get_last_autoanalyze_time(c.oid)) AS v
      FROM pg_class c,
           pg_namespace n
      WHERE relkind = 'r'
        AND n.oid = c.relnamespace
        AND n.nspname <> 'information_schema'
      ORDER BY 3) AS foo;
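If you just want a quick look, the same information is also available directly from the pg_stat_user_tables view (a simpler variant, not part of check_postgres):
SELECT schemaname,
       relname,
       last_vacuum,
       last_autovacuum,
       last_analyze,
       last_autoanalyze
FROM pg_stat_user_tables
ORDER BY schemaname, relname;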
And review the bloat of each table (the ratio should be about 1; if it's far above 1, vacuum is falling behind):
SELECT schemaname,
       relname,
       pg_relation_size(relid) AS table_bytes,
       n_live_tup,
       n_dead_tup,
       (n_live_tup::float + n_dead_tup) / n_live_tup AS bloat
FROM pg_stat_user_tables
WHERE n_live_tup > (current_setting('autovacuum_vacuum_threshold')::bigint * 10) + 1;
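If a table does show high bloat, a manual vacuum will bring it back down while you adjust the settings above (the table name here is just a placeholder):
VACUUM (VERBOSE, ANALYZE) my_big_table;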
Will J. Dunn
On Mon, May 18, 2015 at 3:51 AM, ben.play <benjamin.cohen@xxxxxxxxxxxx> wrote:
Thank you for your quick answer!
And ... you are a genius :)
A simple "analyse
" resolved my problem.
Do We have to do it regularly ?
Thank you a lot !