On 10/22/2012 01:44 PM, Claudio Freire wrote:
> I think, unless it gives you trouble with the page cache, numactl
> --prefer=+0 should work nicely for postgres overall. Failing that,
> numactl --interleave=all would, IMO, be better than the system
> default.
Thanks, I'll consider that.
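For anyone finding this thread later, a minimal sketch of what launching PostgreSQL under those policies could look like. The data directory and the pg_ctl invocation are illustrative, not from this thread, and I've used `--preferred=0` (prefer node 0), which is the spelling current numactl man pages document:

```shell
# Illustrative only: start postgres preferring allocations on node 0,
# falling back to other nodes when node 0 is full.
numactl --preferred=0 pg_ctl -D /var/lib/pgsql/data start

# Or spread allocations evenly across all NUMA nodes:
numactl --interleave=all pg_ctl -D /var/lib/pgsql/data start
```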
FWIW, our current stage cluster node is *not* doing this at all. In
fact, here's a numastat from stage:
                      node0       node1
numa_hit         1623243097  1558610594
numa_miss         257459057   310098727
numa_foreign      310098727   257459057
interleave_hit     25822175    26010606
local_node       1616379287  1545600377
other_node        264322867   323108944
Then from prod:
                      node0       node1
numa_hit         4987625178  3695967931
numa_miss        1678204346   418284176
numa_foreign      418284176  1678204370
interleave_hit        27578       27720
local_node       4988131216  3696305260
other_node       1677698308   417946847
Note how ridiculously uneven node0 and node1 are compared to what
we're seeing in stage. I'm willing to bet something is just plain wrong
with our current production node, so I'm working with our NOC team to
schedule a failover to the alternate node. If that resolves it, I'll try
to get some kind of answer from our infrastructure guys to share
in case someone else encounters this.
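To put a number on "uneven": the miss ratio, numa_miss / (numa_hit + numa_miss), is roughly balanced on stage but lopsided on prod. A quick sketch using the figures quoted above (the `miss_ratio` helper is mine, and the numbers are hardcoded from this email, not read live from numastat):

```shell
#!/bin/sh
# Compute per-node numa_miss ratios from the numastat figures above.
miss_ratio() {
    # $1 = numa_hit, $2 = numa_miss
    awk -v hit="$1" -v miss="$2" \
        'BEGIN { printf "%.1f%%\n", 100 * miss / (hit + miss) }'
}

echo "stage node0: $(miss_ratio 1623243097 257459057)"   # ~13.7%
echo "stage node1: $(miss_ratio 1558610594 310098727)"   # ~16.6%
echo "prod  node0: $(miss_ratio 4987625178 1678204346)"  # ~25.2%
echo "prod  node1: $(miss_ratio 3695967931 418284176)"   # ~10.2%
```

So prod is missing locally on node0 at roughly two and a half times the rate of node1, while stage's two nodes sit within a few points of each other.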
Yes, even if that answer is "reboot." :)
Thanks again!
--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-444-8534
sthomas@xxxxxxxxxxxxxxxx
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance