Re: requested shared memory size overflows size_t

Tom Wilcox wrote:
> default_statistics_target=10000
> wal_buffers=1GB
> max_connections=3
> effective_cache_size=15GB
> maintenance_work_mem=5GB
> shared_buffers=7000MB
> work_mem=5GB

That value for default_statistics_target means that every single query you ever run will take a seriously long time to generate a plan for. Even on an OLAP system, I would consider 10,000 an appropriate setting only for a column or two in a particularly troublesome table; I wouldn't consider even 1,000 in the postgresql.conf to be a good idea. Make the system-wide default much lower, and increase it only on the columns that need it, not for every column on every table.
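
As a concrete sketch (using your nlpg.match_data table and its org column purely as an illustration, and picking 1000 as an arbitrary per-column value), this is how you'd keep the global default modest and raise the target only where it matters:

  -- postgresql.conf: keep the global default reasonable, e.g. default_statistics_target = 100
  ALTER TABLE nlpg.match_data ALTER COLUMN org SET STATISTICS 1000;
  ANALYZE nlpg.match_data;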

There is no reason to set wal_buffers larger than 16MB, the size of a full WAL segment. Have you read http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server yet? checkpoint_segments is the main parameter you haven't touched yet that you should consider increasing. Even if your write load is low, VACUUM running against a large set of tables will be very inefficient unless the checkpoint frequency is reduced somewhat. Something in the 16-32 range would be plenty for an OLAP setup.
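
Something along these lines in postgresql.conf, as a starting point only (the exact checkpoint_segments value depends on your write patterns):

  wal_buffers = 16MB          # one full WAL segment; larger values buy nothing
  checkpoint_segments = 32    # fewer, more spread-out checkpoints help bulk writes and VACUUM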

At 3 connections, a work_mem of 5GB is possibly reasonable. I would still recommend making the default much smaller than that, and instead increasing it to a large value only for the queries that benefit from it. If someone later raises max_connections, your server could run completely out of memory unless work_mem is cut way back as part of that change.
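
A sketch of that pattern, with purely illustrative values:

  -- postgresql.conf: keep the default conservative, e.g. work_mem = 64MB
  -- then, in the session running the big sort- or hash-heavy query:
  SET work_mem = '2GB';
  -- ... run the query ...
  RESET work_mem;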

You could consider setting effective_cache_size to something even larger than that. It is only a hint to the planner about how much data the operating system is likely to be caching; it doesn't allocate anything itself.
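
For example--and the RAM figure here is purely an assumption on my part, adjust it to what the box actually has available for OS cache:

  effective_cache_size = 20GB    # assuming ~24GB RAM with most of it left to the OS cache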

> EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE match_data_id < 100000;

By the way--repeatedly running this form of query to test for improvements in speed is not going to give you particularly good results. Each run executes a bunch of UPDATEs that leave dead rows behind, so the next run done for comparison's sake will either have to cope with that additional overhead or will end up triggering autovacuum and suffering from that instead. If you're going to use an UPDATE statement as your benchmark, at a minimum run a manual VACUUM ANALYZE between test runs to level out the consistency of the results a bit. Ideally you'd restore the whole database to an initial state before each test.
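
A sketch of that test loop, reusing your existing statement (the VACUUM ANALYZE step is the important addition):

  -- repeat this pair for each timed run:
  VACUUM ANALYZE nlpg.match_data;
  EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org WHERE match_data_id < 100000;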

> I will spend the next week making the case for a native install of Linux, but first we need to be 100% sure that is the only way to get the most out of Postgres on this machine.

I really cannot imagine taking a system as powerful as you're using here and crippling it by running through a VM. You should be running Ubuntu directly on the hardware, ext3 filesystem without LVM, split off RAID-1 drive pairs dedicated to OS and WAL, then use the rest of them for the database.

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg@xxxxxxxxxxxxxxx   www.2ndQuadrant.us


--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

