How should I specify work_mem/max_worker_processes if I want to run big queries now and then?

Hello,

I am running a query that fetches about 10,000,000 records in one go, but it is very slow, like "mission impossible".
I am fairly confident these records should fit within my shared_buffers setting (20 GB), and the query is served entirely from my index, which is about 19 MB per partition across 100 partitions, so the index should also fit into shared_buffers easily. (I even created a smaller partial index and dropped the larger old one.)
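(For reference, sizes like the ones above can be checked with something along these lines; the index name below is a placeholder, not my real one:)

    -- Compare one partition's index size against shared_buffers (placeholder name)
    SELECT pg_size_pretty(pg_relation_size('events_2020_01_status_idx')) AS index_size,
           current_setting('shared_buffers')                             AS shared_buffers;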

This situation is disappointing. How can I make my queries much faster if the data grows beyond 10,000,000 rows in a single partition? I am using PostgreSQL 11.6.
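(What I have in mind is something like the per-session overrides below, run just before the occasional big query; the values, table name, and predicate are only placeholders. As far as I understand, max_worker_processes itself can only be changed in postgresql.conf and needs a restart, while work_mem and max_parallel_workers_per_gather can be set per session:)

    -- Raise per-operation (sort/hash) memory for this session only (placeholder value):
    SET work_mem = '512MB';
    -- Allow more parallel workers for this session's scans
    -- (still limited by max_parallel_workers / max_worker_processes):
    SET max_parallel_workers_per_gather = 4;

    -- Hypothetical big query against one partition:
    SELECT id, payload
    FROM   events_2020_01
    WHERE  status = 'pending';   -- matches the partial index predicate

    -- Back to the defaults afterwards:
    RESET work_mem;
    RESET max_parallel_workers_per_gather;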

Many thanks,
James
