Perhaps kernel parameters under control of VMware: shmmax/shmall.
Regards,
Michael Vitale
Thanks for the responses. We'll take this up with VMware support, and
if it isn't a configuration issue, move it along to Red Hat Linux
support.

It was also useful to learn that the kernel's default cgroups support
can consume so much memory on a system with larger RAM. On the next
reboot this will free up about 1.6 GB of RAM. That might give us a
little wiggle room until we know more about the other issue, which
seems to limit us to half our RAM.
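For anyone following along, whether the memory cgroup controller is enabled can be checked like this (the cgroup_disable=memory boot parameter is the usual way to turn it off; verify against your distro's documentation first):

```shell
# Column 4 of /proc/cgroups is 1 if the controller is enabled.
grep '^memory' /proc/cgroups || echo "memory controller not listed"

# If it is enabled and you want the per-page accounting overhead back,
# add this to the kernel command line and reboot:
#   cgroup_disable=memory
```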
frank picabia <fpicabia@xxxxxxxxx> writes:
> My VMware admin has come back with a graph showing memory use over
> the period in question. He has looked over other indicators
> and there are no alarms triggered on the system.
> It jibes with what Cacti reported. Memory was never exhausted
> and used only 50% of allocated RAM at the most.
> If it's not a configuration issue in Postgres, and both internal and
> external tools show memory was not consumed to the point of firing
> off the "cannot fork" error, would that mean that there is a bug in
> either the kernel or Postgres?
[ shrug... ] Postgres is just reporting to you that the kernel wouldn't
perform a fork(). Since you've gone to great lengths to show that
Postgres isn't consuming excessive resources, either this is a kernel
bug or you're running into some kernel-level (not Postgres) allocation
limit.
I continue to suspect the latter. Desultory googling shows that VMware
can be configured to enforce resource allocation limits, so maybe you
should be taking a hard look at your VMware settings.
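If it is a kernel-side limit, a few of the usual suspects are worth ruling out before blaming the hypervisor (paths are standard Linux; these are the limits fork() can trip over even when free RAM remains):

```shell
# fork() fails with EAGAIN/ENOMEM when one of these is exhausted:
ulimit -u                             # per-user process limit (RLIMIT_NPROC)
cat /proc/sys/kernel/threads-max      # system-wide task cap
cat /proc/sys/kernel/pid_max          # PID number space
cat /proc/sys/vm/overcommit_memory    # 2 = strict accounting: allocations
                                      # fail once Committed_AS would exceed
                                      # CommitLimit, regardless of free RAM
grep -E '^Commit' /proc/meminfo       # CommitLimit vs Committed_AS
```

With strict overcommit (overcommit_memory=2), a large Postgres shared memory segment counts against CommitLimit for every backend, which can make the effective ceiling look like a fraction of physical RAM.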
regards, tom lane