RAM - 256GB
shared_buffers - 64GB
maintenance_work_mem - 1GB
work_mem - 24MB
There are ~50 active connections at any given time.

From pgBadger: the average temp file size for a bulk insert/update is about 200MB; the max is multiple GBs, depending on the table.

It is a write-heavy workload, with inserts/updates happening around the clock.
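Given those numbers, one way to try Michael's suggestion without committing to a cluster-wide change is to raise work_mem only for the sessions doing the bulk loads. This is just a sketch; the role name "bulk_loader" and the 256MB value are illustrative assumptions, not recommendations, and with ~50 active connections a global increase multiplies out quickly (each sort/hash node in each query can use up to work_mem):

```sql
-- Sketch only: raising work_mem selectively.
-- "bulk_loader" is a hypothetical role; "256MB" is an example value.

-- Per-session, inside the loading session only:
SET work_mem = '256MB';

-- Or per-role, so every session of that role picks it up on login:
ALTER ROLE bulk_loader SET work_mem = '256MB';

-- Or cluster-wide; as Michael notes, a reload (SIGHUP) is enough,
-- no restart required:
ALTER SYSTEM SET work_mem = '256MB';
SELECT pg_reload_conf();
```

The per-session or per-role form is the easiest to undo and limits the worst-case memory footprint to the loading connections.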
thanks
karthik
---- On Thu, 23 Mar 2023 15:33:11 +0530 MichaelDBA <MichaelDBA@xxxxxxxxxxx> wrote ---

Perhaps take the other approach: increase work_mem to make the bulk inserts fit into memory. You can easily undo work_mem changes. It only requires a sighup: reload, not restart.

What memory do you have now, and what is work_mem currently set to? Also, how many concurrent, active connections do you average at a time?

Regards,
Michael Vitale

Karthik Krishnakumar wrote on 3/23/2023 5:59 AM:

thanks - checked with the devs and it does look like the application is doing some sort of a bulk insert, and at the moment it cannot be changed to use "COPY FROM".

will limiting bulk inserts to match work_mem (assuming this is the GUC that is used in this case) reduce this disk activity?

thanks,
karthik

---- On Thu, 23 Mar 2023 14:43:19 +0530 <Michaeldba@xxxxxxxxxxx> wrote ---

Look at the size of the bind variables and also whether you are doing bulk inserts.

Sent from my iPad

On Mar 23, 2023, at 5:10 AM, Karthik Krishnakumar <karthikk@xxxxxxxxxxxx> wrote:

Hi admins,

I am analyzing temp files being generated, using pgBadger. Under "queries generating the most temporary files", I am finding insert/update queries (via prepared statements) writing a lot of data to temp files. I am trying to figure out why inserts are generating such temp files.

Why are inserts/updates generating temp files? What can be done to avoid this?

thanks,
karthik
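As a cross-check on the pgBadger figures quoted above, temp-file activity can also be observed directly from inside PostgreSQL. A sketch (the 100MB threshold is an example; pg_stat_statements is only available if the extension is installed):

```sql
-- Cumulative temp files and bytes per database since last stats reset:
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_bytes
FROM pg_stat_database;

-- Log every temp file larger than 100MB; a reload is enough, no restart:
ALTER SYSTEM SET log_temp_files = '100MB';
SELECT pg_reload_conf();

-- If pg_stat_statements is installed, rank statements by temp usage:
SELECT query, temp_blks_written
FROM pg_stat_statements
ORDER BY temp_blks_written DESC
LIMIT 10;
```

Comparing temp_blks_written before and after a work_mem change is a quick way to confirm whether the bulk inserts/updates have actually moved into memory.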
Perhaps take the other approach: increase work_mem to make the bulk inserts fit into memory. You can easily undo work_mem changes. It only requires a sighup: reload, not restart.What memory do you have now and what is work_mem currently set to? Also, have many concurrent, active connections do you average at a time?Regards,Michael VitaleKarthik Krishnakumar wrote on 3/23/2023 5:59 AM:thanks - checked with the devs and it does look like the application is doing some sort of a bulk insert, and at the moment it cannot be changed to use "COPY FROM".will limiting bulk inserts to match the work_mem(assuming this is the guc that is used in this case) reduce this disk activity?thanks,karthik---- On Thu, 23 Mar 2023 14:43:19 +0530 <Michaeldba@xxxxxxxxxxx> wrote ---Look at the size of the bind variables and also whether you are doing bulk insertsSent from my iPadOn Mar 23, 2023, at 5:10 AM, Karthik Krishnakumar <karthikk@xxxxxxxxxxxx> wrote:Hi admins,I am analyzing temp files being generated using pg_badger.Under "queries generating the most temporary files", I am finding insert/update queries (via prepared statements) writing a lot of data to temp files.I am trying to figure why inserts are generating such temp files.Why are inserts/updates generating temp files?what can be done to avoid this ?thanks,karthik