Re: buffercache/bgwriter

Hi Cédric,

OK, that sounds promising. But all of these improvements are on the PostgreSQL developers' side;
as an administrator, there is nothing I can do about this right now.

Thanks for your suggestions. For batch jobs other than just COPY, I think they could speed up the process quite well if the backend process no longer had to do all (or 50%) of the writes itself.

It would also be good to see how many buffers were written by backend processes, grouped by Buffer Access Strategy, to better distinguish the "evil" backend writes from the wanted ones.
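
For now, the closest I can get from the existing counters is the overall share of backend writes; this is just a rough sketch, since pg_stat_bgwriter lumps strategy (ring buffer) writes together with ordinary backend writes:

-- share of buffer writes done directly by backends, as opposed to
-- writes done at checkpoints or by the background writer
SELECT buffers_backend,
       buffers_checkpoint,
       buffers_clean,
       round(100.0 * buffers_backend /
             nullif(buffers_checkpoint + buffers_clean + buffers_backend, 0), 1)
         AS backend_write_pct
FROM pg_stat_bgwriter;

With the numbers quoted below, that is 115307 out of 215061 buffer writes, i.e. roughly 54% done by the backends themselves.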

Best Regards,
Uwe

On 23 March 2011 21:23, Cédric Villemain <cedric.villemain.debian@xxxxxxxxx> wrote:
2011/3/23 Uwe Bartels <uwe.bartels@xxxxxxxxx>:
> On 23 March 2011 16:36, Jeff Janes <jeff.janes@xxxxxxxxx> wrote:
>>
>> On Wed, Mar 23, 2011 at 6:19 AM, Jochen Erwied
>> <jochen@xxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
>> > Wednesday, March 23, 2011, 1:51:31 PM you wrote:
>> >
>> > [rearranged for quoting]
>> >
>> >> background writer stats
>> >>  checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean | maxwritten_clean | buffers_backend | buffers_alloc
>> >> -------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------
>> >>                  3 |               0 |              99754 |             0 |                0 |          115307 |        246173
>> >> (1 row)
>> >
>> > buffers_clean = 0 ?!
>> >
>> >> But I don't understand why postgres is unable to fetch a free buffer.
>> >> Does anybody have an idea?
>> >
>> > Somehow it looks like the bgwriter is completely disabled. What are the
>> > relevant settings in your postgresql.conf?
>>
>> I suspect the workload is entirely bulk inserts, and is using a
>> Buffer Access Strategy.  By design, bulk inserts generally write out
>> their own buffers.
>>
>> Cheers,
>>
>> Jeff
>
> Yes, that's true. We are converting databases from one schema into another
> with a lot of computing in between,
> but most of the written data is accessed again soon for other conversions.
> OK, that sounds very simple and thus trustworthy ;).

yes, it is.

>
> So everything is fine and there is no need/potential for optimization?
>

There is probably room for improvement. Without thinking about it too
much, I would suggest:

* review the buffer strategy to increase the pool size when there
are a lot of free buffers
* have a bgwriter working just behind the seqscan (and probably a
bigger pool of buffers anyway)
* do not use the special buffer strategy when the buffer cache has
more than X% of free pages (see the sketch below for a quick way to
check the current share of free buffers)
* add more :)

I believe it should be possible to make good improvements for special
cases that are easily identifiable, like yours.
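
As a quick way to check how much of the cache is actually free at a
given moment (relevant for the X% idea above), something like this
should work if the pg_buffercache contrib module is installed; unused
buffers show up in the view with all fields except bufferid set to NULL:

-- just a sketch, assuming the pg_buffercache contrib module is available
SELECT count(*) AS total_buffers,
       sum(CASE WHEN relfilenode IS NULL THEN 1 ELSE 0 END) AS free_buffers,
       round(100.0 * sum(CASE WHEN relfilenode IS NULL THEN 1 ELSE 0 END)
             / count(*), 1) AS free_pct
FROM pg_buffercache;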

--
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support

