@Claudio So you are basically saying that if I have set effective_cache_size to 10GB and I have 10 concurrent processes, each using a different 2GB index, it would be better to set effective_cache_size to 1GB? Since if I leave it at 10GB, the planner in each running process will assume the whole index is in cache, which won't be true. Did I get that right?
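If that's the case, I guess I can test it per session, since effective_cache_size is only a planner hint and needs no restart. A rough sketch (the table and index column names here are made up):

    -- Compare the plans the planner picks under different cache assumptions.
    SET effective_cache_size = '10GB';
    EXPLAIN SELECT * FROM some_big_table WHERE indexed_col = 42;

    SET effective_cache_size = '1GB';
    EXPLAIN SELECT * FROM some_big_table WHERE indexed_col = 42;
    -- With the lower setting, repeated index page fetches are costed
    -- higher, so the planner may prefer a bitmap or sequential scan.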
@Jeff I have 4 drives in RAID10. The database has around 80GB of indices. I'm not experiencing any slowdowns; I would just like to increase insert/update performance, since the server inserts a lot of data, and to make the select queries faster, since they run against a lot of big tables. I do see serious performance problems when autovacuum kicks in on a few big tables, since it slows everything down a lot. I didn't notice any swapping, and I know those 51MB which were swapped were just sitting there, so swap isn't an issue at all.
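For the autovacuum problem I'm thinking of tuning it per table instead of globally, along these lines (just a sketch; the table name is a placeholder and the numbers are only a starting point):

    -- Vacuum the big table sooner but more gently: a lower scale factor
    -- triggers autovacuum before too many dead rows pile up, and a higher
    -- cost delay throttles its I/O impact on concurrent queries.
    ALTER TABLE big_table SET (
        autovacuum_vacuum_scale_factor = 0.05,
        autovacuum_vacuum_cost_delay   = 20
    );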
Strahinja Kustudić | System Engineer | Nordeus
On Wed, Oct 10, 2012 at 9:30 PM, Jeff Janes <jeff.janes@xxxxxxxxx> wrote:
On Wed, Oct 10, 2012 at 12:12 AM, Strahinja Kustudić
<strahinjak@xxxxxxxxxxx> wrote:
> Hi everyone,
>
> I have a PostgreSQL 9.1 dedicated server with 16 cores, 96GB RAM and RAID10
> 15K SCSI drives which is running CentOS 6.2 x64.

How many drives in the RAID?

> This server is mainly used
> for inserting/updating large amounts of data via copy/insert/update
> commands, and seldom for running select queries.

Are there a lot of indexes?

Are you experiencing performance problems? If so, what are they?
>
> Here are the relevant configuration parameters I changed:
>
> shared_buffers = 10GB
> effective_cache_size = 90GB
> work_mem = 32MB
> maintenance_work_mem = 512MB
> checkpoint_segments = 64
> checkpoint_completion_target = 0.8
>
> My biggest concerns are shared_buffers and effective_cache_size; should I
> increase shared_buffers and decrease effective_cache_size? I read that
> values above 10GB for shared_buffers give lower performance than smaller
> amounts?

There are reports that large shared_buffers can lead to latency
spikes. I don't know how sensitive your workload is to latency,
though. Nor how much those reports apply to 9.1.
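If you want to see what is actually occupying those 10GB of shared_buffers, the contrib module pg_buffercache can tell you; roughly something like this (run as a superuser):

    -- Install the contrib view (once per database).
    CREATE EXTENSION pg_buffercache;

    -- Top 10 relations by number of 8kB shared buffers they occupy.
    SELECT c.relname, count(*) AS buffers
      FROM pg_buffercache b
      JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
     WHERE b.reldatabase = (SELECT oid FROM pg_database
                            WHERE datname = current_database())
     GROUP BY c.relname
     ORDER BY buffers DESC
     LIMIT 10;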
>
> free is currently reporting (during the loading of data):
>
> $ free -m
>              total       used       free     shared    buffers     cached
> Mem:         96730      96418        311          0         71      93120
> -/+ buffers/cache:       3227      93502
> Swap:        21000         51      20949
>
> So it did a little swapping, but only minor,
The kernel has, over the entire time the server has been up, found 51
MB of process memory to swap. That doesn't really mean anything. Do
you see active swapping going on, like with vmstat?
Cheers,
Jeff