If you can get back to me within the next 1:35:00 or so with anything you would like me to run, I will do so; I am scheduled to kill these processes at around midnight PST. Allowing remote access is not an option; these servers don't even have internet access.
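In the meantime, the only extra band-aid I have on my end is a crude cron watchdog along the lines of the sketch below. The daemon name and the 4 GB threshold are just placeholders from my own fiddling, not anything out of the bug report, so adjust them for whichever process is actually ballooning:

#!/bin/bash
# crude watchdog: kill -9 and restart the suspect daemon once its
# resident set grows past a limit (run from cron every few minutes)
DAEMON=modclusterd              # placeholder -- whichever daemon is leaking
LIMIT_KB=$((4 * 1024 * 1024))   # 4 GB, in kB as reported by ps
PID=$(pidof "$DAEMON" | awk '{print $1}')
[ -n "$PID" ] || exit 0
RSS_KB=$(ps -o rss= -p "$PID" | awk '{print $1}')
if [ "$RSS_KB" -gt "$LIMIT_KB" ]; then
    kill -9 "$PID"
    service "$DAEMON" start
fi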
uname -a
Linux iss1a 2.6.32-71.el6.x86_64 #1 SMP Wed Sep 1 01:33:01 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
RPMs installed
cluster-snmp-0.16.2-10.el6.x86_64
modcluster-0.16.2-10.el6.x86_64
clusterlib-3.0.12-23.el6_0.4.x86_64
cluster-cim-0.16.2-10.el6.x86_64
cluster-glue-libs-1.0.5-2.el6.x86_64
cluster-glue-1.0.5-2.el6.x86_64
luci-0.22.2-14.el6_0.1.x86_64
ricci-0.16.2-13.el6.x86_64
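If it helps with reproducing, this is roughly what I am planning to grab from the worst offender before the midnight kill; $PID below is just a stand-in for whichever daemon is ballooning, not a confirmed culprit:

# overall picture plus the top memory consumers
free -m
ps axo pid,user,rss,vsz,comm --sort=-rss | head -20
# per-process detail for the suspect daemon ($PID is a placeholder)
cat /proc/$PID/status
pmap -x $PID > /tmp/pmap.$PID.out
cat /proc/$PID/smaps > /tmp/smaps.$PID.out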
On Wed, Sep 28, 2011 at 8:47 PM, Fabio M. Di Nitto <fdinitto@xxxxxxxxxx> wrote:
On 09/28/2011 11:13 PM, Bill G. wrote:
> Hi List,
>
> I was wondering if you were aware of this bug, and if any of you have
> had success with the suggested workaround that is listed as the
> final comment.
>
> Currently this is happening on 5 of the 9 servers in my cluster; one
> was using 35 GB of RAM.
>
> I was also wondering, for those who have seen the problem: do you have
> any other workable band-aids besides the kill -9 and the prio program?
>
We actually have some serious issues reproducing this bug in order to
fix it, and we are looking into any help / data that could help us to
kill it.
We don't have a workaround, but it would be great if you and Ryan (in CC)
could find a way to share info/data; even temporary ssh access to the
cluster to diagnose what is happening would be wonderful.
Fabio
Thanks,
Bill G.
tc3driver@xxxxxxxxx
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster