<< Is each of these write operations just covering a single row? Does this description apply to just one of the many (how many?) databases, so that there are really 14*N concurrent sessions? >>

All writes are single row. All DBs have exactly the same structure; only the content is different. The server is currently hosting five active DBs. There are actually 14 DBs on the host, but the balance are backups and/or testing environments. When a feed comes in, it can be anything from dozens to millions of rows, and may take minutes or days to run.

I had asked that PgBouncer be installed in front of the host to act as a traffic cop. Try as I might to convince the engineering team that fewer sessions running faster is optimal, they say that 14 concurrent sessions is based on real-world experience of what imports the fastest.

<< You really need to know whether those reads and writes are concentrated in a small region (relative to the amount of your RAM), or widely scattered. If you are reading and writing intensively (which you do seem to be doing) but only within a compact region, then it should not drive other data out of the cache. But, since you do seem to have IO problems from cache misses, and you do have a high level of activity, the easy conclusion is that you have too little RAM to hold the working size of your data. >>

It shouldn't be a problem of physical RAM; I believe there is at least 32 GB on the host. What constitutes "a compact region"? The ETL process takes the feed and distributes it to 85 core tables.

I have been through many PG configuration cycles with the generous help of people on this forum. I think the big problem in getting help has been getting those offering assistance to understand that the overwhelming majority of the time the system is performing single-row reads and writes. The assumption tends to be that the end point of an ETL should just be a series of COPY statements, and that it should all happen very quickly as classic bulk SQL queries.
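To make the "fewer sessions running faster" suggestion concrete, below is a minimal sketch of the kind of PgBouncer configuration I have been asking for. The host, port, database name, and pool sizes are placeholders rather than our real values; the idea is that max_client_conn lets all 14 import workers stay connected, while default_pool_size caps how many server sessions actually execute at once, so the rest queue for a slot instead of competing for IO.

    [databases]
    ; placeholder entry; each of the five active DBs would get one of these
    etl_db = host=127.0.0.1 port=5432 dbname=etl_db

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5              ; placeholder; use whatever auth the site requires
    pool_mode = transaction      ; server connections are reused between transactions
    default_pool_size = 4        ; cap on concurrent server sessions per db/user pair
    max_client_conn = 100        ; all 14 workers can connect; extras wait for a slot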
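On the "compact region" question, one way to answer it empirically, assuming the pg_buffercache contrib extension can be installed on the host, is to watch which relations actually occupy shared buffers during an import, and what the cache hit ratio looks like:

    -- requires the contrib extension and superuser (or pg_monitor) rights
    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- which relations in the current database occupy the most shared buffers
    SELECT c.relname,
           count(*) AS buffers,
           pg_size_pretty(count(*) * 8192) AS buffered  -- 8192 = default block size
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = pg_relation_filenode(c.oid)
     AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                               WHERE datname = current_database()))
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 20;

    -- cache hit ratio per database; a ratio that stays well below ~0.99
    -- during an import suggests the working set does not fit in RAM
    SELECT datname,
           round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4) AS hit_ratio
    FROM pg_stat_database
    WHERE datname IS NOT NULL;

If the top of the buffer list stays small and stable relative to RAM across an import, the single-row traffic is compact; if it churns across all 85 core tables in five databases at once, that is the scattered pattern described in the quoted reply above.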