Re: two memory-consuming postgres processes

On Fri, May 2, 2008 at 2:26 PM, Alexy Khrabrov <deliverable@xxxxxxxxx> wrote:
>
>  So how should I divide say a 512 MB between shared_buffers and, um, what
> else?  (new to pg tuning :)

Don't worry so much about the rest of the settings.  Maybe increase
work_mem (formerly sort_mem) to something like 16MB or so.  That's
about it.
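
For reference, a rough sketch of what that might look like in
postgresql.conf -- these values are only a starting point for a box
with memory in that range, not a recommendation for your exact setup:

    # postgresql.conf (illustrative values only)
    shared_buffers = 512MB   # main buffer cache; changing it needs a restart
    work_mem = 16MB          # per-sort / per-hash memory; can also be SET per session
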

>  I naively thought that if I have a 100,000,000 row table, of the form
> (integer,integer,smallint,date), and add a real coumn to it, it will scroll
> through the memory reasonably fast.

This is a database.  It makes changes on disk in such a way that they
won't be lost should power be cut off.  If you're just going to batch
process data that it's OK to lose halfway through, then Python / Perl /
PHP etc. might be a better choice.

>  Yet when I had shared_buffers=128 MB,
> it was hanging there 8 hours before I killed it, and now with 1500MB is
> paging again for several hours with no end in sight.

You went from kinda small to WAY too big.  512M should be a happy medium.

>  Why can't it just add
> a column to a row at a time and be done with it soon enough? :)

Adding a column is instantaneous.  Populating it is not.
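
To make the distinction concrete (the table and column names below are
made up, not taken from your schema):

    -- Adding the column only touches the system catalogs, so it returns immediately.
    ALTER TABLE big_table ADD COLUMN score real;

    -- Populating it writes a new version of every one of the 100,000,000 rows,
    -- and that is where the hours go.
    UPDATE big_table SET score = 0.0;

Because of MVCC, that UPDATE leaves the old row versions behind as dead
rows until a vacuum cleans them up, so the table roughly doubles on
disk in the process.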

> It takes
> inordinately long compared to a FORTRAN or even python program and there's
> no index usage for this table, a sequential scan, why all the paging?

Again, a database protects your data from getting scrambled should the
program updating it quit halfway through etc...

Have you been vacuuming between these update attempts?  Each one has
created millions of dead rows and bloated your data store.  VACUUM
FULL / CLUSTER / REINDEX may be needed.
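
If it comes to that, something along these lines (table and index
names are placeholders; VACUUM FULL and CLUSTER both take exclusive
locks and can run a long time on a table this size, and the
CLUSTER ... USING form needs 8.3):

    -- Reclaim the space held by dead row versions from the aborted updates
    VACUUM FULL VERBOSE big_table;

    -- Or rewrite the table in index order, which also throws away dead rows
    CLUSTER big_table USING big_table_pkey;

    -- Rebuild the indexes if they have bloated as well
    REINDEX TABLE big_table;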

