Re: General performance/load issue

On 24 November 2011 20:38, Gaëtan Allart <gaetan@xxxxxxxxxxx> wrote:
> Finally, it "crashed" again... :-(
>
> Here's the output of iotop while the database was inaccessible:
>
> 32361 be/4 postgres    0.00 B/s    0.00 B/s  0.00 % 99.99 % postgres: mydb mydb host(34847) idle
> 32244 be/4 postgres  163.48 K/s    0.00 B/s  0.00 % 99.99 % postgres: mydb mydb host(34660) SELECT
> 32045 be/4 postgres    7.78 K/s    0.00 B/s  0.00 % 99.99 % postgres: mydb mydb host(33765) SELECT
> 32158 be/4 postgres    7.78 K/s    0.00 B/s  0.00 % 99.99 % postgres: mydb mydb host(34112) SELECT
> 32242 be/4 postgres    7.78 K/s    0.00 B/s  0.00 % 99.99 % postgres: mydb mydb host(34632) SELECT
> 32372 be/4 postgres    0.00 B/s    0.00 B/s  0.00 % 99.99 % postgres: mydb mydb host(38858) idle in transaction
> 32231 be/4 postgres   15.57 K/s    0.00 B/s  0.00 % 99.99 % postgres: mydb mydb host(38602) SELECT
> 28811 be/4 postgres    3.89 K/s    0.00 B/s  0.00 % 99.99 % postgres: mydb mydb host(40594) SELECT
> 32190 be/4 postgres    3.89 K/s    0.00 B/s  0.00 % 99.99 % postgres: mydb mydb host(38497) SELECT
>
>
>
> And the latest logs:
>
> LOG:  checkpoint complete: wrote 3192 buffers (0.2%); 0 transaction log file(s) added, 1 removed, 0 recycled; write=262.636 s, sync=135.456 s, total=416.630 s
> LOG:  checkpoint complete: wrote 716 buffers (0.0%); 0 transaction log file(s) added, 1 removed, 0 recycled; write=165.497 s, sync=17.111 s, total=193.199 s
>
>
> WARNING:  pgstat wait timeout
> STATEMENT:  SELECT 'commit',sum(pg_stat_get_db_xact_commit(oid)) FROM
> pg_database WHERE datname=$1
>                      UNION ALL
>                     SELECT
> 'rollback',sum(pg_stat_get_db_xact_rollback(oid)) FROM pg_database WHERE
> datname=$2
>
> LOG:  checkpoint complete: wrote 699 buffers (0.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=203.023 s, sync=119.037 s, total=335.012 s
> LOG:  checkpoint complete: wrote 348 buffers (0.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=120.412 s, sync=0.020 s, total=120.435 s
>
>
>
>
>
> Does this help?

Yes.
It seems you have an issue with your checkpoint sync times. This is
fixed in 9.1 and backported to 9.0 here:
http://projects.2ndquadrant.com/backports

It is possible that other problems explain the issue you're seeing. An
immediate step before trying the patch is to reduce your
shared_buffers setting to something much lower, like 1GB.
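
For illustration, a minimal sketch of that change in postgresql.conf (the
1GB figure is just the starting point suggested above, not a tuned value,
and changing shared_buffers requires a server restart):

shared_buffers = 1GB     # temporarily reduced from 12GB while diagnosing the sync stalls

Afterwards you can double-check the running value from psql with:

SHOW shared_buffers;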


>
> Gaëtan
>
> On 24/11/11 17:02, "Tomas Vondra" <tv@xxxxxxxx> wrote:
>
>>On 24 November 2011, 16:39, Robert Treat wrote:
>>> On Thu, Nov 24, 2011 at 9:09 AM, Tomas Vondra <tv@xxxxxxxx> wrote:
>>>> On 24 November 2011, 14:51, Gaëtan Allart wrote:
>>>>> postgresql.conf:
>>>>>
>>>>> max_connections = 50
>>>>> shared_buffers = 12G
>>>>> temp_buffers = 40MB
>>>>> work_mem = 128MB
>>>>> maintenance_work_mem = 256MB
>>>>> max_files_per_process = 8192
>>>>> checkpoint_segments = 256
>>>>> checkpoint_timeout = 30min
>>>>> checkpoint_completion_target = 0.9
>>>>
>>>> Fine. Let's see the options that look suspicious.
>>>>
>>>
>>> I think you missed some suspicious settings... I'd recommend setting
>>> shared_buffers to 8GB, and I'd likely reduce checkpoint_segments to
>>> 30 and set the checkpoint timeout back to 5 minutes. Everything about
>>> the way this server is configured (including those vm settings) is
>>> pushing it towards delaying the WAL/buffer/checkpoint work as long as
>>> possible, which matches the pattern of good performance initially
>>> followed by a period of poor performance and heavy I/O.
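
Purely as an illustration of the settings suggested above (not a definitive
recommendation; shared_buffers needs a restart, while the checkpoint settings
only need a reload), the postgresql.conf changes would look roughly like:

shared_buffers = 8GB                  # down from 12GB
checkpoint_segments = 30              # down from 256
checkpoint_timeout = 5min             # back from 30min
checkpoint_completion_target = 0.9    # unchanged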
>>
>>Yes, checkpoints were my first thought too. OTOH the OP reported that most
>>of the I/O is caused by the WAL writer - that's not exactly the part that
>>does the work during a checkpoint. Plus, WAL writes can't really be
>>postponed, as WAL is usually O_DIRECT and fsynced, right?
>>
>>You're right that the writes are postponed, but I generally see that as a
>>good thing when combined with spread checkpoints. And even with those vm
>>settings (about 3.2GB for background writes), I wouldn't expect this
>>behaviour (because the page cache usually expires after 30 seconds). Say
>>100% of the shared buffers are dirty and need to be written. You have 27
>>minutes (30*0.9) to do that - that means about 8MB/s. With a 30-second
>>expiry there might be about 240MB accumulated before pdflush starts to
>>write the data to the SSD. And the SSD can surely handle more than 50MB/s.
>>So why the long delay? The question is what else is going on there.
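
Spelling out the arithmetic behind those numbers, using the settings quoted
earlier in the thread:

checkpoint window   = 30 min * 0.9 = 27 min = 1620 s
worst-case dirty    = shared_buffers = 12GB ~ 12288 MB
required write rate = 12288 MB / 1620 s ~ 7.6 MB/s   (the "about 8MB/s")
backlog before the 30 s expiry ~ 8 MB/s * 30 s ~ 240 MB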
>>
>>But all this is just guessing - I want to see the log_checkpoints
>>messages, iostat results, etc.
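
For reference, a minimal way to collect that information (assuming shell
access to the server and the sysstat package for iostat):

log_checkpoints = on     # in postgresql.conf; a reload is enough to apply it
iostat -x 5              # extended per-device statistics every 5 seconds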
>>
>>> On a side note, I'd guess your work_mem is probably too high. 50
>>> (connections) x 128 (MB work_mem) x 2 (sorts per query) ~ 12GB of RAM,
>>> which is 25% of the total RAM on the box. That doesn't necessarily mean
>>> game over, but it seems like it wouldn't be that hard to end up
>>> thrashing when set up that way. YMMV.
>>
>>This is one of the reasons why effective_cache_size should be lower than
>>32GB, probably ...
>>
>>Tomas



-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Development, Expertise and Training

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


