I am doing some benchmark tests with different configurations...
I'll try what you said, Kevin.
Thank you all
2008/6/30 Kevin Neufeld <kneufeld@xxxxxxxxxxxxxxx>:
I agree. I have a similar system that I use for development purposes and have shared_buffers sitting comfortably around 1GB. On production systems with 16GB of RAM, I've seen this as high as 12GB. There is talk nowadays, however, that this setting could drop back down to defaults on modern installations, letting the OS handle cached memory as it sees fit.
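As a rough sketch, the development-box figures above might look like this in postgresql.conf (the values are illustrative assumptions, not recommendations for any particular machine):

```
# postgresql.conf -- illustrative values only
shared_buffers = 1GB    # e.g. a dev box; production boxes above run much higher
```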
In any case, the biggest performance gain I see for you would be setting work_mem appropriately. This is the memory Postgres is permitted to use for sorts, merges, hash joins, etc. before being forced to disk. It defaults to 1MB, which in my opinion is far too low. Note that this memory is allocated per sort/hash/etc. operation, so for a complicated query, Postgres could make several such allocations. Even so, I think you could raise it considerably. If your system is a dedicated Postgres box, I would take the total memory, subtract what is needed for the OS, subtract what you decided to use for shared_buffers, and divide the rest by your 100 connections. So, for you, I see this around 30MB. On my development box with only a few connections, I have this around 500MB and sometimes spike it to 1.2GB on the fly before a long-running query.
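The sizing rule above can be sketched in a few lines. The function name and the sample numbers (a 4GB box reserving 512MB each for the OS and shared_buffers) are assumptions for illustration; plug in your own figures:

```python
def suggest_work_mem_mb(total_ram_mb, os_reserve_mb, shared_buffers_mb, max_connections):
    """Divide the RAM left over after the OS and shared_buffers evenly
    across the configured connections, as described above."""
    remaining = total_ram_mb - os_reserve_mb - shared_buffers_mb
    return remaining // max_connections

# Hypothetical 4 GB box, 100 connections:
print(suggest_work_mem_mb(4096, 512, 512, 100))  # 30 (MB), matching the ~30MB figure above
```

Keep in mind this is per-operation memory, so a complex query on a busy server can use several multiples of work_mem at once.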
Cheers,
Kevin
Alvaro Herrera wrote:
Rafael Domiciano wrote:
The Postgres version is 8.3.3 and I am using Fedora Core 8.
I have around 70 simultaneous connections on the current server. I am
allocating 100 for this.
100 MB? That's not very much either. You can probably get a hefty
performance boost by upping it a lot more (depending on whether the
machine is doing something else, or Postgres is the only service running
on it).
--
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin