Re: Bug# 618321 modclusterd memory leak

On 09/29/2011 07:24 AM, Bill G. wrote:
> Hi Lon, and Ryan,
> 
> Please get back to me within the next 1:35:00 or so with anything you
> would like me to run, as I am scheduled to kill these processes at
> around midnight PST.

Unlikely to happen: both Lon and Ryan are on the East Coast and
probably asleep. We will need to wait for the next run.

>  Allowing remote access is not an option; these
> servers don't even have internet access.

Understood. I am sure Ryan will come back to you with what's needed.
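
In the meantime, anything you can capture locally before the kill would
help us. A rough sketch of what I have in mind, untested on my side (the
output paths and the single-instance assumption are mine, adjust to
taste):

  #!/bin/sh
  # Capture the memory state of modclusterd before killing it.
  # pidof -s returns a single PID even if several instances run.
  PID=$(pidof -s modclusterd) || exit 1
  OUT=/tmp/modclusterd-$PID
  mkdir -p "$OUT"

  # Per-mapping usage, to see which mappings are growing.
  pmap -x "$PID" > "$OUT/pmap.txt"

  # The kernel's detailed per-mapping accounting.
  cat "/proc/$PID/smaps" > "$OUT/smaps.txt"

  # Full core image for offline inspection (gcore ships with gdb).
  gcore -o "$OUT/core" "$PID"

A core of a 35GB process will be huge, so skip the gcore step if you
don't have the disk for it; the pmap and smaps output alone is small
and already useful to us.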

> 
> uname -a
> Linux iss1a 2.6.32-71.el6.x86_64 #1 SMP Wed Sep 1 01:33:01 EDT 2010
> x86_64 x86_64 x86_64 GNU/Linux

Did you also file a ticket with GSS/RH support?
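
On your band-aid question below: until there is a proper fix, a cron
job that restarts modclusterd once its resident set crosses a threshold
may be gentler than a blind kill -9. A rough, untested sketch; the 2GB
limit is an arbitrary example and I am assuming the stock modclusterd
init script is in place:

  #!/bin/sh
  # Restart modclusterd when its resident set exceeds LIMIT_KB.
  LIMIT_KB=2097152   # 2GB, in kB as ps reports it; pick your own value

  PID=$(pidof -s modclusterd) || exit 0   # not running, nothing to do
  RSS_KB=$(ps -o rss= -p "$PID" | tr -d ' ')
  [ -n "$RSS_KB" ] || exit 0              # raced with process exit

  if [ "$RSS_KB" -gt "$LIMIT_KB" ]; then
      logger "modclusterd rss ${RSS_KB}kB over ${LIMIT_KB}kB, restarting"
      service modclusterd restart
  fi

Run it from cron every few minutes; it is just an automated version of
the kill-and-restart you are already doing, but it keeps the leak
bounded.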

> 
> RPMs installed
> cluster-snmp-0.16.2-10.el6.x86_64
> modcluster-0.16.2-10.el6.x86_64
> clusterlib-3.0.12-23.el6_0.4.x86_64
> cluster-cim-0.16.2-10.el6.x86_64
> cluster-glue-libs-1.0.5-2.el6.x86_64
> cluster-glue-1.0.5-2.el6.x86_64
> luci-0.22.2-14.el6_0.1.x86_64
> ricci-0.16.2-13.el6.x86_64
> 
> On Wed, Sep 28, 2011 at 8:47 PM, Fabio M. Di Nitto
> <fdinitto@xxxxxxxxxx> wrote:
> 
>     On 09/28/2011 11:13 PM, Bill G. wrote:
>     > Hi List,
>     >
>     > I was wondering if you were aware of this bug, and if any of you have
>     > had success with the suggested workaround that is listed as the
>     > final comment.
>     >
>     > Currently this is happening on 5 of the 9 servers in my cluster;
>     > one was using 35GB of RAM.
>     >
>     > I was also wondering: for those who have seen the problem, do you have
>     > any other workable band-aids besides the kill -9 and the prio
>     > program?
>     >
> 
>     We are actually having serious trouble reproducing this bug in order
>     to fix it, and we are looking for any help or data that could help us
>     kill it.
> 
>     We don't have a workaround, but it would be great if you and Ryan (in
>     CC) could find a way to share info/data; even temporary ssh access to
>     the cluster to diagnose what is happening would be wonderful.
> 
>     Fabio
> 
> -- 
> Thanks,
> Bill G.
> tc3driver@xxxxxxxxx

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

