Hi Scott,
Thanks for the clear answers!
Scott Carey wrote:
You must either increase the memory that ZFS uses, or
increase PostgreSQL's shared_buffers and work_mem so that the aggregate of the
two uses more RAM.
I believe, that you have not told ZFS to reserve 8GB, but rather told
it to limit itself to 8GB.
That is correct, but since it will use the whole 8 GB anyway, I can
just as easily say that it reserves that memory ;)
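For reference, this is how that cap is typically set on OpenSolaris, in
/etc/system (a minimal sketch; the 8 GB value just mirrors the limit we
discussed, and a reboot is needed for it to take effect):

    * /etc/system -- cap the ZFS ARC at 8 GB (0x200000000 bytes)
    set zfs:zfs_arc_max = 0x200000000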
Some comments below:
On Thu, Oct 30, 2008 at 8:15 AM, Christiaan Willemsen <cwillemsen@xxxxxxxxxxxxx> wrote:
Hi there,
I configured PostgreSQL on our OpenSolaris machine. Specs:
2x quad-core 2.6 GHz Xeon
64 GB of memory
16x 15k5 SAS drives
If you do much writing, and even more so with ZFS, it is critical
to put the WAL log on a different ZFS volume (and perhaps different disks)
than the data and indexes.
I already did that. I also have a separate disk pair for the ZFS intent
log.
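For anyone following along, here is a sketch of that kind of layout (the
pool name "tank" and the device names are made up; adjust to your hardware):

    # data on its own dataset, matched to PostgreSQL's 8 kB block size
    zfs create tank/pgdata
    zfs set recordsize=8k tank/pgdata

    # WAL on a separate dataset
    zfs create tank/pg_xlog

    # dedicate a mirrored disk pair as the ZFS intent log (slog)
    zpool add tank log mirror c4t0d0 c4t1d0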
Are you counting both the memory used by
postgres and the memory used by the ZFS ARC cache? It is the
combination you are interested in, and performance will be better if it
is biased towards one being a good chunk larger than the other. In my
experience, if you are doing more writes, a larger file system cache is
better; if you are doing more reads, a larger postgres cache is better (the
overhead of calling read() in 8k chunks to the OS, even when the data is
cached, drives CPU use up).
No, the figure I gave is without the ARC cache.
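If you want to check what the ARC is actually using at any moment, kstat
reports it (standard on Solaris):

    # current ARC size, in bytes
    kstat -p zfs:0:arcstats:size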
If you do very large aggregates, you may
need as much as 1 GB of work_mem. However, a setting that high would require
very careful tuning and a reduction of the space used by shared_buffers and
the ZFS ARC. It's dangerous, since each connection running a large
aggregate or sort may consume a lot of memory.
Well, some tasks may need a lot, but I guess most will do fine with the
settings we're using right now.
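One standard way to square that (a sketch of ordinary PostgreSQL behavior,
not something specific to this thread; the table and column names are
invented): keep the global work_mem modest and raise it per session only
for the heavy queries:

    -- postgresql.conf: conservative default for every connection
    work_mem = '64MB'

    -- in the one session that runs the big aggregate:
    SET work_mem = '1GB';
    SELECT customer_id, sum(amount) FROM orders GROUP BY customer_id;
    RESET work_mem;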
So it looks like I can tune the ARC to use more memory, and also
increase shared_buffers to let postgres cache more tables?
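As a starting point, something like this perhaps (purely illustrative
numbers for a 64 GB machine, biased toward the ARC as you suggest for a
write-heavy load; they would need testing against our workload):

    # postgresql.conf -- example split on a 64 GB machine
    shared_buffers = 8GB          # postgres-side cache
    effective_cache_size = 40GB   # tell the planner about the large ARC
    # ...and raise zfs_arc_max in /etc/system to match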