"Shane | SkinnyCorp" <shanew@xxxxxxxxxxxxxx> writes:
> The real issue is this, we have THE SAME queries taking anywhere from .001 -
> 90.0 seconds... the server is using 98% of the available RAM at all times
> (because of the persistant connections via php), and I don't know what to
> do.

I have a feeling that the answer is going to boil down to "buy more RAM"
--- it sounds a lot like you're just overstressing your server.  The
more active backends you have, the more RAM goes to process-local
memory, and the less is available for kernel disk cache.  Even if you
don't go into outright swapping, the amount of disk I/O needed goes up
the smaller the kernel disk cache gets.

Another possible line of attack is to use persistent (pooled)
connections to cut down the number of live backend processes you need.
However, depending on what your application software is, that might take
more time/effort (= money) than dropping in some more RAM.

You can investigate this theory by watching "top" output (the first few
lines about memory usage, not the process listing) as well as "vmstat"
output.

> uptime: 12:23:08 up 132 days, 19:16, 2 users, load average: 19.75,
> 17.34, 18.86

Load averages approaching 20 are not good either ... what sort of box
are you running on anyway?

As for the postgresql.conf settings, the only ones I'd seriously
question are

	max_fsm_pages = 20000		# min max_fsm_relations*16, 6 bytes each
	max_fsm_relations = 1000	# min 100, ~50 bytes each

These are the defaults, and are probably too small for a DB exceeding a
hundred meg or so.

	max_files_per_process = 3052	# min 25

You really have your kernel set to support 3052 * 75 simultaneously open
files?  Back this off.  I doubt values beyond a couple hundred buy
anything except headaches.

	wal_buffers = 192

This is an order-of-magnitude overkill too, especially if your
transactions are mostly small.  I know it's only a megabyte or two, but
you evidently need that RAM more elsewhere.
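Something along these lines, say --- the exact values here are
illustrative, not tuned to your installation, so treat them as
directions to move in rather than gospel:

```
# postgresql.conf sketch -- adjust to your actual DB size and table count
max_fsm_pages = 100000		# roomier free-space map for a multi-hundred-MB DB
max_fsm_relations = 5000	# enough slots for all your tables and indexes
max_files_per_process = 200	# a couple hundred is plenty
wal_buffers = 16		# small transactions don't need 192
```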
	enable_seqscan = false

I don't think this is a good idea in general.

			regards, tom lane
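P.S. If you want hard numbers on how much of that RAM is really going to
kernel disk cache versus backend processes, a quick check (assuming a
Linux box, which your uptime output suggests) is:

```shell
# Snapshot (Linux): total RAM vs. what the kernel currently holds as
# page/disk cache.  Watch Cached shrink as the backend count climbs.
grep -E '^(MemTotal|Cached):' /proc/meminfo
```

Running "vmstat 5" alongside shows the same cache figure over time,
plus the si/so columns that reveal whether you're actually swapping.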