Thanks for your answers.
I have tried the following session settings (roughly as in the sketch below):
-- show work_mem;  -> "10485kB", the initial work_mem from my first post
-- set session work_mem = '100000kB';
-- set session geqo_threshold = 12;
-- set session join_collapse_limit = 15;
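Roughly, the test session looked like this (the SELECT is only a simplified stand-in for my real aggregation query):

SET work_mem = '100000kB';
SET geqo_threshold = 12;
SET join_collapse_limit = 15;

-- then re-check the plan with the new settings in place
EXPLAIN (ANALYZE, BUFFERS)
SELECT o.cle_obs, count(*)
FROM occtax.observation o
GROUP BY o.cle_obs;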
I have a small machine, with an SSD disk and 8GB RAM, so I cannot realistically increase work_mem to 2GB (or more). There are only about 300,000 rows in occtax.observation, although that will grow (possibly up to 3 million...).
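One thing I have not tried yet is raising work_mem only for the transaction that runs this query, so the global setting can stay low on this 8GB machine. Something like:

BEGIN;
-- applies to this transaction only; the server-wide work_mem is untouched
SET LOCAL work_mem = '512MB';
-- ... run the aggregation query here ...
COMMIT;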
I am running PostgreSQL 9.6. I should probably test this against PostgreSQL 11, as many improvements have been made since.
I even tried removing all non-aggregated columns and keeping only o.cle_obs (the primary key), so the query ends with just
GROUP BY o.cle_obs
but the plan still shows a GroupAggregate rather than a HashAggregate.
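If it helps the diagnosis, I suppose I could also penalize sort-based plans for one session, just to see whether the planner is willing to consider a HashAggregate at all (a debugging trick only; whether it changes the plan also depends on whether an index already provides the sort order):

-- diagnostic only, never in production
SET enable_sort = off;
EXPLAIN
SELECT o.cle_obs
FROM occtax.observation o
GROUP BY o.cle_obs;
RESET enable_sort;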
Obviously I have already run VACUUM ANALYZE.
My current PostgreSQL settings:
max_connections = 100
shared_buffers = 2GB
effective_cache_size = 6GB
work_mem = 10485kB
maintenance_work_mem = 512MB
min_wal_size = 1GB
max_wal_size = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
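For completeness, this is how I double-check what a session actually sees (pg_settings also reflects session-level SET overrides):

SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('work_mem', 'shared_buffers', 'effective_cache_size',
               'geqo_threshold', 'join_collapse_limit');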