Re: Bug? Query plans / EXPLAIN using gigabytes of memory

On 30/04/12 11:26, Rob Sargent wrote:
On 04/29/2012 07:19 PM, Toby Corkindale wrote:
On 27/04/12 09:33, Tom Lane wrote:
Toby Corkindale <toby.corkindale@xxxxxxxxxxxxxxxxxxxx> writes:
I've created a bit of a test case now.
There's a Perl script here:
http://dryft.net/postgres/

AFAICT, what is happening is that we're repeating the planning of that
messy nest of views for each child table of foo. For most of the
children the planner eventually decides that the join degenerates to
nothing because of constraint exclusion, but not until it's expended a
fair amount of time and memory space per child.
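
To picture the shape involved, a minimal sketch of that kind of constraint-exclusion partitioning (table names and range boundaries made up for illustration):

create table foo (id int, val text);
create table foo_1 (check (id >= 0    and id < 1000)) inherits (foo);
create table foo_2 (check (id >= 1000 and id < 2000)) inherits (foo);
-- ...and so on, one child table per range...

With dozens of children, even the work of proving each child irrelevant adds up.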

I looked at whether we could improve that by having inheritance_planner
use a temporary memory context per child, but that doesn't look very
practical: it would add a good deal of extra data-copying overhead,
and some of the data structures involved are not easily copiable.

The general scheme of replanning per child might be questioned as well,
but IMO it's fairly important given the looseness of inheritance
restrictions --- it's not unlikely that you *need* different plans for
different children. We might be able to reconsider that approach
whenever we invent an explicit concept of partitioned tables, since
presumably the partitions would all be essentially alike.

In the meantime, the best advice I can come up with is to reconsider
whether you need so many partitions. That mechanism is really designed
for only a dozen or two partitions at most.


Hi Tom,
Thanks for looking into this, I appreciate you spending the time.

The system I've come up with for partitioning this data requires quite
a lot of partitions - say thirty to seventy - but I didn't realise it
would cause trouble down the line, so I'll see if it can be reworked
to reduce the number.

For what it's worth, the actual query that was blowing out to
gigabytes was only hitting a couple of dozen partitions in each table
it touched - but it touched three such tables, about sixteen
times (!) each.

I'm still curious, though: why can I do a SELECT * FROM complexview
without using much memory, while an UPDATE foo ... FROM complexview
exhausts all the memory?
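
For concreteness, the two shapes being compared look roughly like this (the column names are just placeholders):

-- planned once, stays cheap:
explain select * from complexview;

-- goes through inheritance_planner, which re-plans complexview
-- once per child of foo (this is where the memory goes):
explain update foo set val = cv.val
from complexview cv
where foo.id = cv.id;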

Thanks,
Toby

Does

UPDATE foo set <values>
where foo.id in (select id from complexview...)

also swallow the memory?

Yes, definitely. (See an earlier post of mine for several variations on the query.)

However a two-stage process doesn't, i.e.
create temp table matched as select id from complexview;
update foo set <values> where id in (select id from matched);
(or the same thing with FROM)
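
Spelled out with the FROM variant (matched, id and val are placeholder names here):

create temp table matched as select id, val from complexview;
update foo set val = m.val from matched m where foo.id = m.id;

Materialising the view first means the UPDATE's per-child planning only ever sees a plain temp-table scan, not the whole nest of views.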

