
Re: Is there any method to limit resource usage in PG?

Hi,

Here is the current situation:
In the testing environment,
even when my customer changed shared_buffers from 1024MB to 712MB or 512MB,
the total memory consumption stayed almost the same.
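
One thing worth confirming first: shared_buffers can only be changed at server start, so if the server was not restarted after editing postgresql.conf, the running value is still the old one. A minimal JDBC sketch to check the live setting (the connection URL and credentials are placeholders):

import java.sql.*;

public class CheckSetting {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details, for illustration only.
        String url = "jdbc:postgresql://localhost:5432/testdb";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement();
             // SHOW reports the value the running server is actually using
             ResultSet rs = st.executeQuery("SHOW shared_buffers")) {
            if (rs.next()) {
                System.out.println("shared_buffers = " + rs.getString(1));
            }
        }
    }
}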

I think PostgreSQL always uses as much memory as it can.
For a query-and-insert action:
First, the data is pulled into the private memory of the backend process serving the client.
Then, the backend process pushes the data into shared memory, i.e. into shared_buffers.
If shared_buffers is not big enough to hold all of the result data, part of it will be in shared_buffers,
and the rest will remain in the backend process's private memory.

Is my understanding right?
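
For what it's worth, the PostgreSQL JDBC driver buffers the entire result set in the JVM by default, so with about 3,000,000 rows the client program alone can use a great deal of memory. A minimal sketch of the usual fix, cursor-based fetching plus batched inserts (the table and column names here are made up for illustration):

import java.sql.*;

public class CopyRows {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details and table names, for illustration only.
        String url = "jdbc:postgresql://localhost:5432/testdb";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            conn.setAutoCommit(false); // required for cursor-based fetching in the PostgreSQL driver

            try (Statement sel = conn.createStatement();
                 PreparedStatement ins = conn.prepareStatement(
                         "INSERT INTO target_table (id, payload) VALUES (?, ?)")) {

                sel.setFetchSize(1000); // stream 1000 rows at a time instead of buffering everything
                try (ResultSet rs = sel.executeQuery("SELECT id, payload FROM source_table")) {
                    int pending = 0;
                    while (rs.next()) {
                        ins.setLong(1, rs.getLong(1));
                        ins.setString(2, rs.getString(2));
                        ins.addBatch(); // one round trip per batch instead of per row
                        if (++pending == 1000) {
                            ins.executeBatch();
                            pending = 0;
                        }
                    }
                    if (pending > 0) ins.executeBatch();
                }
                conn.commit();
            }
        }
    }
}

With autocommit off and a positive fetch size, the driver pulls rows in chunks instead of all at once, so client-side memory stays roughly constant no matter how large the result is.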

Best Regards


2013/8/27 Jeff Janes <jeff.janes@xxxxxxxxx>
On Sun, Aug 25, 2013 at 11:08 PM, 高健 <luckyjackgao@xxxxxxxxx> wrote:
> Hello:
>
> Sorry for disturbing.
>
> I am now encountering a serious problem: there is not enough memory.
>
> My customer reported that when they run a program, the total
> memory and disk I/O usage both reached the threshold value (80%).
>
> That program is written in Java.
> It uses JDBC to pull data out of the DB; the query joins several
> tables together and returns about 3,000,000 records.
> Then the program uses JDBC again to write the records row by row,
> inserting them into another table in the DB.

What is using the memory, the postgres backend or the client program?

Cheers,

Jeff
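
(A quick way to start answering that, as a sketch: print the JVM's own heap usage while the job runs and compare it with the backend process's resident size shown by top or ps.)

public class HeapReport {
    public static void main(String[] args) {
        // Reports the client JVM's heap usage; if this is large while the
        // postgres backend stays small, the memory is on the client side.
        Runtime rt = Runtime.getRuntime();
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb  = rt.maxMemory() / (1024 * 1024);
        System.out.println("JVM heap in use: " + usedMb + " MB (max " + maxMb + " MB)");
    }
}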

