On 15 November 2014 02:10, Alexey Vasiliev <leopard_ne@xxxxxxxx> wrote:

> Ok. Just need to know what other developers think about this - should
> pgtune care about this case? Because I am not sure that users with 512GB
> will use pgtune.

pgtune should certainly care about working with large amounts of RAM. Best practice does not stop applying at 32GB of RAM; if anything, it becomes more and more important beyond that point. I am not interested in edge cases or unusual configurations. I am interested in setting decent defaults that provide a good starting point for administrators on all sizes of hardware.

I use pgtune to configure automatically deployed cloud instances. My goal is to prepare instances tuned according to best practice for standard types of load. Ideally administrators will not need to tweak anything themselves, but at a minimum they will have been given a good starting point. pgtune does a great job of this, apart from the insanely high shared_buffers. At the moment I run pgtune and then reduce shared_buffers to 8GB whenever pgtune has selected a higher value (a sketch of this capping step follows at the end of this message). The values it currently chooses on higher-RAM boxes are not best practice and are quite wrong.

The work_mem settings also seem very high, but so far they have not posed a problem and may well be correct. I'm trusting pgtune here rather than my outdated guesses.

--
Stuart Bishop <stuart@xxxxxxxxxxxxxxxx>
http://www.stuartbishop.net/
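
For illustration, here is a minimal Python sketch of the capping step described above: it post-processes a pgtune-generated postgresql.conf and rewrites any shared_buffers value above 8GB down to 8GB. The cap_shared_buffers helper and the hard-coded 8GB ceiling are assumptions for this example, not part of pgtune itself.

    # Minimal sketch: cap shared_buffers at 8GB in a pgtune-generated config.
    # The helper name and the 8GB ceiling are assumptions, not part of pgtune.
    import re

    CAP_KB = 8 * 1024 * 1024                      # 8GB, expressed in kB
    UNITS_KB = {"kB": 1, "MB": 1024, "GB": 1024 * 1024}

    def cap_shared_buffers(conf_text):
        """Rewrite any shared_buffers setting above 8GB down to 8GB."""
        def repl(match):
            value, unit = int(match.group(1)), match.group(2)
            if value * UNITS_KB[unit] > CAP_KB:
                return "shared_buffers = 8GB"
            return match.group(0)                 # leave smaller values alone
        return re.sub(r"shared_buffers\s*=\s*(\d+)(kB|MB|GB)", repl, conf_text)

    if __name__ == "__main__":
        sample = "shared_buffers = 120GB\nwork_mem = 64MB\n"
        print(cap_shared_buffers(sample), end="")
        # prints: shared_buffers = 8GB
        #         work_mem = 64MB

In a deployment pipeline this would run once against the generated config before the instance starts PostgreSQL, so the capped value is in place from first boot.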