Re: Lots of abandoned connections from sssd

Take a look at Bugzilla ticket
https://bugzilla.redhat.com/show_bug.cgi?id=1156577
By the way, I was incorrect about it being related to the POODLE update. It
turns out there has been a change to the SASL GSSAPI module that has caused
some chaos.


On Mon, Nov 10, 2014 at 8:16 PM, Paul Robert Marino <prmarino1@xxxxxxxxx> wrote:
> No, that's not it.
>
> If RHEVM (the manager) is using 389 server in "RHDS" mode for authentication
> for its web portal, that's where the issue pops up.
> When I get back to the office in the morning I'll send a link to a Bugzilla
> ticket about it on oVirt 3.5 which, I discovered earlier tonight, also
> applies to RHEV (oVirt) 3.3 and 3.4.
>
>
>
> -- Sent from my HP Pre3
>
> ________________________________
> On Nov 10, 2014 7:58 PM, Rich Megginson <rmeggins@xxxxxxxxxx> wrote:
>
> On 11/10/2014 05:44 PM, Paul Robert Marino wrote:
>
> When did this start?
> The reason I ask is that I've noticed a lot of problems with RHEV since the
> recent updates to nss and openssl to deal with the POODLE vulnerability.
>
>
> Like what?
>
>
> The workaround for a lot of them is to ensure minssf is set to a value
> higher than 0.
> I'm wondering if this might be something similar. In the past I had never
> set that option because my LDAP database contained no passwords and Kerberos
> was its own database, so the risk was nominal. Now I find that, at least for
> RHEV (oVirt), I'm suddenly forced to set it.
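[For readers following along: "minssf" here refers to the `nsslapd-minssf` attribute on `cn=config` in 389-ds. A minimal sketch of raising it above 0, applied with ldapmodify as Directory Manager, might look like the fragment below. The value 1 is just an example (any setting above 0 forces at least SASL integrity protection), and a server restart may be needed for the change to take effect.]

```ldif
dn: cn=config
changetype: modify
replace: nsslapd-minssf
nsslapd-minssf: 1
```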
>
>
> So if you run 389 in RHEV, you have to set minssf, and if you run the same
> version of 389 on bare metal, you don't have to set minssf?
> If this is not an accurate description of your problem, can you please
> elaborate?
>
>
>
>
> -- Sent from my HP Pre3
>
> ________________________________
> On Nov 10, 2014 3:58 PM, Orion Poplawski <orion@xxxxxxxxxxxxx> wrote:
>
> On 11/10/2014 12:07 PM, Rich Megginson wrote:
>> On 11/10/2014 11:59 AM, Orion Poplawski wrote:
>>> On 11/06/2014 10:35 AM, Orion Poplawski wrote:
>>>> On 11/06/2014 03:14 AM, Rich Megginson wrote:
>>>>> Try to reproduce the problem while using gdb to capture stack traces
>>>>> every
>>>>> few
>>>>> seconds as in
>>>>> http://www.port389.org/docs/389ds/FAQ/faq.html#debugging-hangs
>>>>> Ideally, we can get some stack traces of the server during the time
>>>>> between
>>>>> the BIND and the ABANDON
>>>>
>>>> Thanks, I'll give it a shot. The gdb command line is a little incorrect
>>>> though, I think you want:
>>>>
>>>> gdb -ex 'set confirm off' -ex 'set pagination off' \
>>>>     -ex 'thread apply all bt full' -ex 'quit' \
>>>>     /usr/sbin/ns-slapd `pidof ns-slapd` > stacktrace.`date +%s`.txt 2>&1
>>>>
>>>> - added % in date format, drop trailing ``
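[The FAQ suggests capturing traces every few seconds; a small wrapper around the corrected command might look like this. The 5-second interval and the file naming are arbitrary choices, and the loop is skipped entirely if ns-slapd is not running.]

```shell
# Sketch: capture a full stack trace of ns-slapd every 5 seconds
# (arbitrary interval) until interrupted with Ctrl-C.
# If ns-slapd is not running, PID is empty and nothing happens.
PID=$(pidof ns-slapd 2>/dev/null)
if [ -n "$PID" ]; then
    while true; do
        gdb -ex 'set confirm off' -ex 'set pagination off' \
            -ex 'thread apply all bt full' -ex 'quit' \
            /usr/sbin/ns-slapd "$PID" > "stacktrace.$(date +%s).txt" 2>&1
        sleep 5
    done
fi
```

[Each pass writes one timestamped file, so the traces bracketing the BIND-to-ABANDON window can be picked out afterwards by mtime.]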
>>> gdb ended up aborting while trying to do the stack trace when the problem
>>> occurred (https://bugzilla.redhat.com/show_bug.cgi?id=1162264) so I
>>> haven't
>>> had any luck there.
>>
>> What platform are you using? Can you provide an example of the gdb output?
>>
>
> Scientific Linux 6.5
> 389-ds-base-1.2.11.32-1.el6.x86_64
>
> gdb output is in the bug report, but basically:
> ../../gdb/linux-nat.c:1411: internal-error: linux_nat_post_attach_wait:
> Assertion `pid == new_pid' failed.
>
>
> Hmm - never seen this before.
>
>
>>>
>>> It seems to be a problem with one of my servers only. I've shut it down
>>> and
>>> the user can authenticate fine against our backup server. I tried
>>> restoring
>>> from backup with bak2db but that didn't appear to help. Is there a more
>>> thorough restore-from-scratch procedure I should try next, to see if it
>>> is some kind of corruption?
>>
>> I don't know. I'm not sure how db corruption could be causing this issue.
>> The best way to restore is to completely rebuild the database e.g. db2ldif
>> then ldif2db - then reinit all of your replicas.
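[The rebuild Rich describes can be sketched roughly as below. The backend name "userRoot", the LDIF path, and the service name are all hypothetical and deployment-specific; the offline db2ldif/ldif2db tools should be run with the server stopped. The guard makes this a no-op on machines without 389-ds installed.]

```shell
# Hypothetical rebuild sketch: export the backend to LDIF, then re-import.
# Backend name, path, and service name are examples only.
BACKEND=userRoot
LDIF=/tmp/userRoot.ldif
if command -v db2ldif >/dev/null 2>&1; then
    service dirsrv stop
    db2ldif -n "$BACKEND" -a "$LDIF"     # export the backend to LDIF
    ldif2db -n "$BACKEND" -i "$LDIF"     # rebuild the backend from that LDIF
    service dirsrv start
fi
```

[After the import, each replica would then be reinitialized from this server, per the next message.]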
>
> So the "reinit all of your replicas" part sounds scary to me. Any
> documentation for this process?
>
>
> Why is it scary?  It's just the regular replica initialization process.
> There's no trick, nothing fancy, no extra documentation.  The thing to
> realize is that a replica reinit does a database reinit, from scratch.
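[For what it's worth, a reinit can be triggered from the supplier side by writing to the replication agreement entry. A sketch, assuming the standard `nsds5BeginReplicaRefresh` attribute; the agreement name and suffix below are placeholders for your own:]

```ldif
dn: cn=exampleAgreement,cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
changetype: modify
replace: nsds5BeginReplicaRefresh
nsds5BeginReplicaRefresh: start
```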
>
>
>
> --
> Orion Poplawski
> Technical Manager 303-415-9701 x222
> NWRA, Boulder/CoRA Office FAX: 303-415-9702
> 3380 Mitchell Lane orion@xxxxxxxx
> Boulder, CO 80301 http://www.nwra.com
>
--
389 users mailing list
389-users@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/389-users




