James(王旭) wrote on 20.11.2019 at 08:56:
> I am doing a query to fetch about 10000000 records in one go. But
> the query seems very slow, like "mission impossible". I am very
> confident that these records should fit into my shared_buffers
> setting (20G), and my query is covered entirely by my index, which is this
> big: (19M x 100 partitions). This index size can also be put into
> shared_buffers easily. (Actually I even made a new partial index which
> is smaller and deleted the bigger old index.)
>
> This kind of situation makes me very disappointed. How can I make my
> queries much faster if my data grows to more than 10000000 rows in one
> partition? I am using pg11.6.

max_worker_processes can't be changed without a restart, but work_mem and
max_parallel_workers_per_gather can be changed for a single session.

If you normally don't need parallel execution, I would suggest configuring
max_worker_processes and max_parallel_workers to a sensible "maximum" value,
and setting max_parallel_workers_per_gather to a very low value, or even 0
if you want parallel query disabled by default.

Then, when you run a really "big" query, you can raise
max_parallel_workers_per_gather for that session so the query uses parallel
execution, and increase work_mem so that potential sort, hash or grouping
operations don't spill to disk.

To give you more concrete hints, we would need to see the current execution
plan, generated with explain (analyze, buffers, format text). If you turn on
track_io_timing before that, it would be even more helpful.

Thomas
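
P.S. A rough sketch of what I mean, with purely illustrative numbers (pick
values that match your hardware); the table and column in the final query are
made up, substitute your real query:

    -- one-time configuration (ALTER SYSTEM writes to postgresql.auto.conf;
    -- max_worker_processes still needs a server restart to take effect)
    ALTER SYSTEM SET max_worker_processes = 16;
    ALTER SYSTEM SET max_parallel_workers = 16;
    ALTER SYSTEM SET max_parallel_workers_per_gather = 0;  -- parallel off by default
    SELECT pg_reload_conf();

    -- per-session overrides for the one "big" query
    SET max_parallel_workers_per_gather = 8;
    SET work_mem = '512MB';      -- so sorts/hashes don't spill to disk
    SET track_io_timing = on;    -- needs superuser privileges

    EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
    SELECT *
    FROM   some_partitioned_table          -- hypothetical name
    WHERE  created_at >= date '2019-11-01'; -- hypothetical column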