Daniel Westermann <daniel.westermann@xxxxxxxxxxxxxxxx> writes:
>> Thank you, Merlin. As said, I know that "not in" is not a good choice in this case, but I still do not get what is going on here. Why does the server repeatedly search for NULL values when I decrease work_mem, and why not when I increase it?
>The core point is that one plan is using a hashed subplan and the other is
>not, because the planner estimated that the hashtable wouldn't fit into
>work_mem. With a hashtable you'll have one probe into the hashtable per
>outer row, and each probe is O(1) unless you are unlucky about data
>distributions, so the runtime is more or less linear. Without a
>hashtable, the inner table is rescanned for each outer row, so the
>runtime is O(N^2) which gets pretty bad pretty fast. "Materializing"
>the inner table doesn't really help: it gets rid of per-inner-row
>visibility checks and some buffer locking overhead, so it cuts the
>constant factor some, but the big-O situation is still disastrous.
Thanks, Tom
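
For anyone following along, here is a minimal sketch of the two plan shapes described above. The tables, row counts, and work_mem settings are made up purely for illustration; only the plan shapes are the point.

-- Hypothetical tables, just big enough that the hashtable size matters.
CREATE TABLE outer_t (id int);
CREATE TABLE inner_t (id int);
INSERT INTO outer_t SELECT generate_series(1, 100000);
INSERT INTO inner_t SELECT generate_series(1, 100000);
ANALYZE outer_t;
ANALYZE inner_t;

-- With generous work_mem the planner hashes the subquery result once:
-- EXPLAIN shows "Filter: (NOT (hashed SubPlan 1))", one O(1) probe per
-- outer row, so runtime is more or less linear.
SET work_mem = '64MB';
EXPLAIN SELECT * FROM outer_t WHERE id NOT IN (SELECT id FROM inner_t);

-- With work_mem too small for the estimated hashtable, the subplan is
-- instead (materialized and) rescanned for every outer row: O(N^2).
SET work_mem = '64kB';
EXPLAIN SELECT * FROM outer_t WHERE id NOT IN (SELECT id FROM inner_t);

-- If inner_t.id can never be NULL, the usual rewrite is NOT EXISTS,
-- which the planner can turn into a Hash Anti Join; that spills to
-- batches when memory is short instead of going quadratic.
EXPLAIN SELECT o.* FROM outer_t o
WHERE NOT EXISTS (SELECT 1 FROM inner_t i WHERE i.id = o.id);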