On Thu, Jan 18, 2018 at 12:13 PM, Davlet Panech <dpanech@xxxxxxxxx> wrote:
> On 1/17/2018 5:57 PM, scott ribe wrote:
>> On Jan 17, 2018, at 2:57 PM, Davlet Panech <dpanech@xxxxxxxxx> wrote:
>>> Does my configuration look reasonable? I just don't understand how it could possibly use up 19 GB of memory based on the configuration below. Is there a memory leak in there somewhere?
>>
>> It does seem awfully high, but... An update can involve a join across multiple tables. Or an update can run a trigger which can cascade. Either of those could result in an "accidental cross product" join, which can always blow up memory.
>
> There must be a way to put an upper limit on memory even for such cases. I was under the impression that parameters such as "work_mem" serve that purpose, is that not the case? So an "accidental cross product" join's memory usage is unbounded? It can't be... could somebody confirm this please?
>
> Thanks,
> D.
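For what it's worth, the "accidental cross product" scott mentions is usually just a join whose join condition is missing or wrong, so every row of one table ends up paired with every row of the other. A minimal sketch, with made-up table and column names:

-- Intended update: each order gets its own customer's region (hypothetical tables)
UPDATE orders o
SET region = c.region
FROM customers c
WHERE c.id = o.customer_id;

-- Accidental cross product: the join condition was left out, so every row of
-- customers is paired with every row of orders before the update is applied
UPDATE orders o
SET region = c.region
FROM customers c;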
work_mem isn't really an upper limit on overall memory usage. It's an upper limit on how much memory certain operations (sorts, hashes) can use before spilling to disk, and a complex query, or a group of queries running at once, can use multiple instances of work_mem and easily exhaust system memory. This is why work_mem shouldn't be set any higher than necessary. The wiki explains this better (a concrete sketch follows the quote):
"This size is applied to each and every sort done by each user, and
complex queries can use multiple working memory sort buffers. Set it to
50MB, and have 30 users submitting queries, and you are soon using 1.5GB
of real memory. "
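To make that concrete, here is a rough sketch of how a single query can hold several work_mem-sized allocations at once (orders, customers, products and their columns are invented for illustration):

SET work_mem = '50MB';  -- applies per sort/hash operation, per backend

EXPLAIN (ANALYZE, BUFFERS)
SELECT c.region, sum(o.total) AS region_total
FROM orders o
JOIN customers c ON c.id = o.customer_id   -- a hash join here can use up to work_mem
JOIN products p ON p.id = o.product_id     -- a second hash join: another work_mem
GROUP BY c.region                          -- a hash aggregate: another work_mem
ORDER BY region_total DESC;                -- a sort: yet another work_mem

The EXPLAIN ANALYZE output shows whether each sort stayed in memory ("Sort Method: quicksort  Memory: ...") or spilled ("Sort Method: external merge  Disk: ..."), which is a quick way to tell when work_mem is being hit.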
I would go with Tom's suggestion in this case, though, since the bug described by the patch he found seems to fit the situation you're seeing. It's always important to be running the latest patch release so that an already-fixed bug can be ruled out as the cause of an issue.

Keith
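P.S. If it helps, confirming which release the server is actually running is just:

SELECT version();
-- or
SHOW server_version;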