Re: problem with lost connection while running long PL/R query

Hi,

Thanks for the help. You have definitely identified the problem, but I am still looking for a solution that works for me. I tried setting vm.overcommit_memory=2, but this just made the query fail sooner than before, though without killing the entire connection to the database. I imagine this means that I really am trying to use more memory than the system can handle?

I am wondering if there is a way to tell PostgreSQL to flush a set of table rows out to disk so that the memory they are using can be freed. Basically, I am creating my table of 8 million rows in 4 chunks, and each chunk fits in memory. As the 4 chunks are generated inside one query, I imagine that at some point PostgreSQL tries to put the four together and write them to disk as a table, and that is when things break. Is it possible to tell PostgreSQL to write one "chunk" at a time? I imagine I could do this using PL/pgSQL and iteration, as in the sketch below, but perhaps there is a PostgreSQL configuration parameter that can help with this?
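
To make the idea concrete, something like this rough sketch is what I have in mind (run_model() and model_results are just stand-ins for my actual PL/R function and output table):

DO $$
BEGIN
    -- run_model() and model_results are stand-ins for the real PL/R
    -- function and output table; each INSERT handles one chunk
    FOR chunk IN 1..4 LOOP
        INSERT INTO model_results
        SELECT * FROM run_model(chunk);
        RAISE NOTICE 'chunk % finished', chunk;
    END LOOP;
END;
$$;

Each INSERT is its own statement, so I would hope each chunk gets written out and its memory freed before the next call starts, although everything inside the DO block still runs in one transaction. Is that a reasonable approach, or is the memory being used somewhere that this would not help?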

Thanks again,
David


On 05/16/2013 03:35 PM, Ryan Kelly wrote:
On Thu, May 16, 2013 at 02:47:28PM +0200, David M. Kaplan wrote:
Hi,

I have a query that uses a PL/R function to run a statistical model
on data in a PostgreSQL table.  The query runs the function 4 times,
each call generating about 2 million rows of results, for a final
table of about 8 million rows.  Each call takes roughly 3-5 minutes
to finish its work.  I know that the function itself runs fine, as
it reports progress and I have had no trouble using it on slightly
smaller datasets.  However, this time it completes the 4 runs but
ends with psql saying the connection to the server has been lost.
In the PostgreSQL log file, I have:

2013-05-16 14:05:47 CEST LOG:  server process (PID 27488) was terminated by signal 9: Killed

This usually indicates the OOM killer has killed your backend. Please
see the documentation here:

http://www.postgresql.org/docs/9.2/interactive/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT

-Ryan



--
**********************************
David M. Kaplan
Chargé de Recherche 1

Institut de Recherche pour le Développement
Centre de Recherche Halieutique Méditerranéenne et Tropicale
av. Jean Monnet
B.P. 171
34203 Sète cedex
France

Phone: +33 (0)4 99 57 32 27
Fax: +33 (0)4 99 57 32 95

http://www.umr-eme.org/team/dkaplan/
http://www.amped.ird.fr/
**********************************


