Hmm, I continued some testing. Now, strangely, the congestion_wait occurs even if free -m shows me about 1500 MBytes free (before that I tried to "fill up" the cache with a plain "cat from_here > to_there" ... which pushed free down to 1500). But, interestingly, the value also doesn't get any lower. So it stays around 1500 ... which looks like the same scenario as before, when it stopped around 120-130 ... So, obviously, there is still some struggling for memory, but not as hard as when free memory is completely down. Now the file grows quite steadily ... and congestion_wait is not always present in ps ... (the run finished after 6 1/2 minutes instead of the optimum of around 2 1/2). Something very fishy about the memory manager ....

Andras Fabian

-----Original Message-----
From: Andras Fabian
Sent: Tuesday, 13 July 2010 13:35
To: 'Craig Ringer'
Cc: pgsql-general@xxxxxxxxxxxxxx
Subject: Re: Re: Re: Re: PG_DUMP very slow because of STDOUT ??

I have just rechecked one of our old-generation machines, which never had/has this problem (there, the backup of a 100 GB database - to a 10 GByte dump - still goes through in about 2 hours). They seem to have this high caching ratio too (one of the machines says it has 15 GByte in cache out of 16) ... but still, they manage to gracefully give back RAM to whoever needs it (i.e. the backup process with its COPY-to-STDOUT, or others). Well, the difference is of course 6 kernel versions (old machine 2.6.26 ... new machine 2.6.32) ... or who knows which kernel settings ...

Andras Fabian

-----Original Message-----
From: Craig Ringer [mailto:craig@xxxxxxxxxxxxxxxxxxxxx]
Sent: Tuesday, 13 July 2010 12:51
To: Andras Fabian
Cc: pgsql-general@xxxxxxxxxxxxxx
Subject: Re: Re: Re: Re: PG_DUMP very slow because of STDOUT ??

On 13/07/2010 6:26 PM, Andras Fabian wrote:
> Wait, now, here I see some correlation! Yes, it seems to be the memory! When I started my COPY-to-STDOUT experiment I had some 2000 MByte free (well, the server has 24 GByte ... maybe other PostgreSQL processes used up the rest). Then I could watch via "ll -h" how the file grew nicely (obviously no congestion), and in parallel see via "free -m" how the "free" memory went down. Then it reached a level below 192 MByte, and the congestion began. Now it is going back and forth around 118-122-130 ... Obviously the STDOUT thing ran out of some memory resource.
> Now I "only" need to find out what is running out, why, and how I can prevent that.
> Could there be some extremely big STDOUT buffering in play ????

Remember, "STDOUT" is misleading. The data is sent down the network socket between the postgres backend and the client connected to that backend. There is no actual stdio involved at all. Imagine that the backend's stdout is redirected down the network socket to the client, so when it sends to "stdout" it's just going to the client.

Any buffering you are interested in is in the unix or tcp/ip socket (depending on how you're connecting), in the client, and in the client's output to file/disk/whatever.

--
Craig Ringer
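
To make that point concrete, below is a minimal sketch of the receiving side of COPY ... TO STDOUT using libpq. It is only an illustration of the protocol flow, not pg_dump's actual code; the connection string "dbname=mydb", the table name "my_table" and the output file "dump_file" are placeholders, not anything from this thread. The backend streams CopyData messages down its connection socket, and it is the client that reads them and writes them wherever it likes.

    /*
     * Sketch: client side of COPY ... TO STDOUT via libpq.
     * conninfo, table name and output file name are placeholders.
     */
    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=mydb");   /* placeholder conninfo */
        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        PGresult *res = PQexec(conn, "COPY my_table TO STDOUT");
        if (PQresultStatus(res) != PGRES_COPY_OUT)
        {
            fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
            PQclear(res);
            PQfinish(conn);
            return 1;
        }
        PQclear(res);

        FILE *out = fopen("dump_file", "w");         /* placeholder output path */
        char *buf;
        int   len;

        /*
         * Each PQgetCopyData() pulls one data row off the connection socket;
         * the client then decides what to do with it - here, write it to a
         * local file. The backend never touches a real stdout.
         */
        while ((len = PQgetCopyData(conn, &buf, 0)) > 0)
        {
            fwrite(buf, 1, len, out);
            PQfreemem(buf);
        }
        fclose(out);

        if (len == -2)
            fprintf(stderr, "COPY error: %s", PQerrorMessage(conn));

        res = PQgetResult(conn);                     /* final command status */
        PQclear(res);
        PQfinish(conn);
        return 0;
    }

So, as Craig says, any buffering sits in the socket between backend and client plus whatever the client does with its output; the file writes on the receiving side then compete for the page cache, which is presumably where the free-memory behaviour described above comes in.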