Re: Optimisation help

Thanks, I'm trying these new settings. I gained only 3 seconds (2:40 vs 2:37) on a batch of 1000 rows, and that batch runs every 2 minutes.

For the database version, I'm on PostgreSQL 8.1.11, x64.

Since the work is done inside a procedure, it seems that EXPLAIN ANALYZE doesn't give details.

I suppose I will have to break the procedure into pieces to see exactly where so much time is being lost.
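Something along these lines is what I have in mind, just a rough sketch: process_batch and the step bodies are placeholders, and since clock_timestamp() only arrived in 8.2, timeofday() has to do on 8.1:

    CREATE OR REPLACE FUNCTION process_batch() RETURNS void AS $$
    DECLARE
        t0 timestamptz;
    BEGIN
        t0 := timeofday()::timestamptz;
        -- ... first step of the treatment ...
        RAISE NOTICE 'step 1 took %', timeofday()::timestamptz - t0;

        t0 := timeofday()::timestamptz;
        -- ... second step ...
        RAISE NOTICE 'step 2 took %', timeofday()::timestamptz - t0;
    END;
    $$ LANGUAGE plpgsql;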

regards

david

Greg Smith wrote:
On Tue, 4 Mar 2008, dforums wrote:

max_connections = 256
shared_buffers = 1500 # min 16 or max_connections*2, 8KB each
work_mem = 22000                        # min 64, size in KB
effective_cache_size = 2048             # typically 8KB each

Well, you're giving the main database server a whopping 1500*8K=12MB of space to work with. Meanwhile you're giving each of the 256 clients up to 22MB of work_mem, which means they can use 5.6GB total. This is quite backwards.

Increase shared_buffers to something like 250000 (2GB), decrease work_mem to at most 10000 and probably lower, and raise effective_cache_size to something like 5GB=625000. Whatever data you've collected about performance with your current settings is pretty much meaningless with only giving 12MB of memory to shared_buffers and having a tiny setting for effective_cache_size.
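In postgresql.conf terms that works out to something like the following; treat these as starting points rather than exact values:

    shared_buffers = 250000           # 250000 * 8KB = ~2GB
    work_mem = 10000                  # 10MB per sort/hash, per client
    effective_cache_size = 625000     # 625000 * 8KB = ~5GB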

Oh, and make sure you ANALYZE your tables regularly.
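For example, either of these will refresh the planner statistics (my_table is just a placeholder name):

    ANALYZE;            -- every table in the database
    ANALYZE my_table;   -- or a single table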

random_page_cost = 3

And you shouldn't be playing with that until you've got the memory usage to something sane.

Also, you didn't mention what version of PostgreSQL you're using. You'll need 8.1 or later to have any hope of using 8GB of RAM effectively on a 4-core system.

But my biggest fear is that, while the database is only 10 GB for now, I will have to grow it tenfold over the next six months, and I'm afraid these problems will get worse.

It's very unlikely you will be able to get good performance on a 100GB database with a single SATA drive. You should be able to get great performance with the current size though.

Regarding updates, I have around 10,000 updates over a span of 10 minutes. Is there a setting to optimise updates?

10000 updates / 600 seconds = 17 updates/second. That's trivial; even a single boring drive can get 100/second. As someone already suggested your real problem here is that you'll be hard pressed to handle the amount of seeking that goes into a larger database with only a single drive.

--
* Greg Smith gsmith@xxxxxxxxxxxxx http://www.gregsmith.com Baltimore, MD


--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your Subscription:
http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&extra=pgsql-performance
