Re: Postgres backend using huge amounts of ram

Gary Doades <gpd@xxxxxxxxxxxx> writes:
> I've just loaded a moderately sized dataset into postgres and was
> applying RI constraints to the tables (using pgadmin on windows). Part
> way though I noticed the (single) postgres backend had shot up to using
> 300+ MB of my RAM!

> Since I can't get an explain of what the alter table was doing I used this:

[ looks in code... ]  The test query for an ALTER ADD FOREIGN KEY looks
like

	SELECT fk.keycols FROM ONLY relname fk
	LEFT OUTER JOIN ONLY pkrelname pk
	  ON (pk.pkkeycol1 = fk.keycol1 [AND ...])
	WHERE pk.pkkeycol1 IS NULL AND
	  (fk.keycol1 IS NOT NULL [AND ...])
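To make the template concrete (table and column names here are hypothetical, just for illustration), a constraint like orders.customer_id REFERENCES customers(id) would produce a check query of this shape:

```sql
-- Hypothetical example of the generated check query.
-- Any row it returns is a violation: an orders row whose
-- customer_id has no matching customers row.
SELECT fk.customer_id
  FROM ONLY orders fk
  LEFT OUTER JOIN ONLY customers pk
    ON (pk.id = fk.customer_id)
 WHERE pk.id IS NULL
   AND fk.customer_id IS NOT NULL;
```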

It's also worth noting that work_mem is temporarily set to
maintenance_work_mem, which you didn't tell us the value of:

	/*
	 * Temporarily increase work_mem so that the check query can be
	 * executed more efficiently.  It seems okay to do this because the
	 * query is simple enough to not use a multiple of work_mem, and one
	 * typically would not have many large foreign-key validations
	 * happening concurrently.  So this seems to meet the criteria for
	 * being considered a "maintenance" operation, and accordingly we use
	 * maintenance_work_mem.
	 */
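So it's worth checking what that setting actually is on your system, e.g. (the '64MB' below is just an illustrative value, not a recommendation):

```sql
-- Inspect the current value, and optionally change it for this
-- session only, before running the ALTER TABLE:
SHOW maintenance_work_mem;
SET maintenance_work_mem = '64MB';
```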

> I then analysed the database. ...
> This is the same set of hash joins, BUT the backend only used 30M of
> private RAM.

My recollection is that hash join chooses the number of hash-table
partitions partly on the basis of the estimated number of input rows.
Since the estimate was way off, the actual in-memory hash table got
out of hand a bit :-(
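Which is consistent with what you saw after analyzing: with fresh statistics the estimates were sane and the backend stayed around 30MB. The practical upshot (hypothetical table names again) is to ANALYZE freshly loaded tables before adding the constraints:

```sql
-- Update planner statistics first, so the row estimates
-- (and hence the hash-join sizing) are realistic:
ANALYZE customers;
ANALYZE orders;

ALTER TABLE orders
  ADD CONSTRAINT orders_customer_fk
  FOREIGN KEY (customer_id) REFERENCES customers (id);
```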

			regards, tom lane

