Gary Doades wrote:
How much RAM can a single postgres backend use? I've just loaded a moderately sized dataset into postgres and was applying RI constraints to the tables (using pgAdmin on Windows). Partway through I noticed the (single) postgres backend had shot up to using 300+ MB of my RAM!
Oops - guess that's why they call it a Beta. My first guess was a queue of pending foreign-key checks or triggers etc., but then you go on to say...
Since I can't get an EXPLAIN of what the ALTER TABLE was doing, I used this:

    SELECT count(*)
    FROM booking_plan, reqt_dates
    WHERE booking_plan.reqt_date_id = reqt_dates.reqt_date_id;

and sure enough this query caused the backend to use 300M of RAM. The plan for this was:
[snip]
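For pinning down where the estimates go wrong, EXPLAIN ANALYZE shows the planner's estimated row counts alongside the actual ones. A sketch using the tables from your query; note that it really executes the query, so the memory use will spike again:

    -- Compare estimated vs. actual row counts for the problem join.
    -- Caution: EXPLAIN ANALYZE executes the query.
    EXPLAIN ANALYZE
    SELECT count(*)
    FROM booking_plan, reqt_dates
    WHERE booking_plan.reqt_date_id = reqt_dates.reqt_date_id;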
I then analysed the database. Note: there are no indexes at this stage except the primary keys. The same query then gave:
[snip]
This is the same set of hash joins, BUT the backend only used 30M of private RAM.
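One quick way to confirm the ANALYZE took effect is to look at the statistics the planner reads from pg_class (a sketch; reltuples and relpages are only updated by VACUUM, ANALYZE and CREATE INDEX):

    -- Check the row/page counts the planner will use for these tables.
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname IN ('booking_plan', 'reqt_dates');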
I'm guessing that in the first case the default estimate of 1000 rows per table means PG chooses to do the join entirely in RAM; once it knows there really are a lot of rows, it knows not to.
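That default is easy to see for yourself: EXPLAIN on a freshly created, never-analysed table shows the fallback estimate. A sketch (the table name is made up; the cost figures will vary):

    -- A brand-new table has no statistics, so the planner
    -- falls back to its default row estimate.
    CREATE TABLE estimate_demo (id integer);
    EXPLAIN SELECT * FROM estimate_demo;
    -- expect something like: Seq Scan on estimate_demo (... rows=1000 ...)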
However, I thought PG was supposed to spill to disk when the memory required exceeded the config-file limits (work_mem in the 8.0 beta, sort_mem in earlier releases). If you can put together a simple test case, I'm sure someone would be interested in squashing this bug.
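A minimal, self-contained starting point for such a test case might look like this (table names and row counts are made up; the idea is to join two large, never-analysed tables so the planner works from its default estimates, then watch the backend's memory):

    -- Two joinable tables, deliberately NOT analysed,
    -- so the planner only has its default estimates.
    CREATE TABLE reqt_dates_test (reqt_date_id integer PRIMARY KEY);
    CREATE TABLE booking_plan_test (reqt_date_id integer);

    INSERT INTO reqt_dates_test
        SELECT * FROM generate_series(1, 500000);
    INSERT INTO booking_plan_test
        SELECT (random() * 499999)::integer + 1
        FROM generate_series(1, 2000000);

    -- Watch the backend's private memory while this runs,
    -- then ANALYZE both tables and compare.
    SELECT count(*)
    FROM booking_plan_test, reqt_dates_test
    WHERE booking_plan_test.reqt_date_id = reqt_dates_test.reqt_date_id;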
--
Richard Huxton
Archonet Ltd