We're trying to move to our new server setup. We have one server that seems fine under load, but when we bring up the next one we're having trouble with it hanging. The second server does have more clients (and different ones), so it could be something about what a client is doing.

Here is the server: 389-Directory/1.3.10.1 B2020.133.1625, installed from EPEL, running on CentOS Linux release 7.8.2003.

And here is the pstack output listing the only thread that is not idle. Can anyone tell me what is going on?

Thread 44 (Thread 0x7f858e9b3700 (LWP 2515)):
#0  0x00007f860a90fe02 in slapi_atomic_load_32 () at /usr/lib64/dirsrv/libslapd.so.0
#1  0x00007f860a8d4e8e in slapi_get_mapping_tree_node_by_dn () at /usr/lib64/dirsrv/libslapd.so.0
#2  0x00007f860a8d5179 in slapi_be_select () at /usr/lib64/dirsrv/libslapd.so.0
#3  0x00007f860a9296a0 in vattr_test_filter () at /usr/lib64/dirsrv/libslapd.so.0
#4  0x00007f860a8b6ec4 in slapi_vattr_filter_test_ext_internal () at /usr/lib64/dirsrv/libslapd.so.0
#5  0x00007f860a8b7ba6 in slapi_vattr_filter_test_ext () at /usr/lib64/dirsrv/libslapd.so.0
#6  0x00007f8600a99e02 in acl__resource_match_aci () at /usr/lib64/dirsrv/plugins/libacl-plugin.so
#7  0x00007f8600a9b280 in acl_access_allowed () at /usr/lib64/dirsrv/plugins/libacl-plugin.so
#8  0x00007f8600aae9f7 in acl_access_allowed_main () at /usr/lib64/dirsrv/plugins/libacl-plugin.so
#9  0x00007f860a8f0cbc in plugin_call_acl_plugin () at /usr/lib64/dirsrv/libslapd.so.0
#10 0x00007f860a8b638d in test_filter_access () at /usr/lib64/dirsrv/libslapd.so.0
#11 0x00007f860a8b6fb5 in slapi_vattr_filter_test_ext_internal () at /usr/lib64/dirsrv/libslapd.so.0
#12 0x00007f860a8b6d31 in slapi_vattr_filter_test_ext_internal () at /usr/lib64/dirsrv/libslapd.so.0
#13 0x00007f860a8b7ba6 in slapi_vattr_filter_test_ext () at /usr/lib64/dirsrv/libslapd.so.0
#14 0x00007f85ff7c0df1 in ldbm_back_next_search_entry_ext () at /usr/lib64/dirsrv/plugins/libback-ldbm.so
#15 0x00007f860a8deca6 in send_results_ext.constprop.5 () at /usr/lib64/dirsrv/libslapd.so.0
#16 0x00007f860a8e0e09 in op_shared_search () at /usr/lib64/dirsrv/libslapd.so.0
#17 0x0000557410dd3c0e in do_search ()
#18 0x0000557410dc198a in connection_threadmain ()
#19 0x00007f86086a0c5b in _pt_root () at /lib64/libnspr4.so
#20 0x00007f8608040ea5 in start_thread () at /lib64/libpthread.so.0
#21 0x00007f86076ec8dd in clone () at /lib64/libc.so.6

Deborah Crocker, PhD
Systems Engineer III
Office of Information Technology
The University of Alabama
Box 870346
Tuscaloosa, AL 36587
Office 205-348-3758 | Fax 205-348-9393
deborah.crocker@xxxxxx

-----Original Message-----
From: William Brown <wbrown@xxxxxxx>
Sent: Wednesday, May 27, 2020 5:43 PM
To: 389-users@xxxxxxxxxxxxxxxxxxxxxxx
Subject: [EXTERNAL] [389-users] Re: Re: Re: Advice to bring new servers into production

> On 27 May 2020, at 23:20, Crocker, Deborah <crock@xxxxxx> wrote:
> 
> Thanks - I think we have enough ideas in here to get this going. One last question:
> If replication is set up through the host name - how often does the directory server do a DNS lookup? Does it do it once at startup (or at creation of the replication agreement)?

I "think" it's every time it initiates a new connection - but remember, for replication, that *is* quite different to a client doing a search, so I'd be pretty careful about this. IMO you should be standing up your replacement servers in parallel, joining them all, moving the IPs, then decommissioning the old servers. Alternately, you'll need an outage window to shut down your old servers, export the LDIF, and then import and bring up the new ones. I think "IPs are a limited resource" really does make this whole process much, much harder than it needs to be for you ... :(

> -----Original Message-----
> From: William Brown <wbrown@xxxxxxx>
> Sent: Tuesday, May 26, 2020 10:48 PM
> To: 389-users@xxxxxxxxxxxxxxxxxxxxxxx
> Subject: [EXTERNAL] [389-users] Re: Re: Advice to bring new servers into production
> 
> There are a few options.
> The best would be a load balancer that holds the IPs, so that it's transparent to your LDAP servers where they are.
> 
> But also, as mentioned, the virtual IPs are honestly the best way. Linux can have multiple IPs on an interface, so you can just have two IPs on one interface, and that's the best way to do this.
> 
> Alternately, don't rely on the IP: lower your DNS TTLs to a very short time, change the DNS A/AAAA records, and then do it that way.
> 
>> On 27 May 2020, at 06:17, Crocker, Deborah <crock@xxxxxx> wrote:
>> 
>> I'd like not to take up two IP addresses per host indefinitely. We have re-IP'd our hosts before, so I know we can do this, but it was during a downtime when everything was restarted. I'm just trying to get away with not restarting the masters.
>> 
>> Deborah Crocker, PhD
>> Systems Engineer III
>> Office of Information Technology
>> The University of Alabama
>> Box 870346
>> Tuscaloosa, AL 36587
>> Office 205-348-3758 | Fax 205-348-9393
>> deborah.crocker@xxxxxx
>> 
>> From: Leo Pleiman <lpleiman@xxxxxxxxxxxxx>
>> Sent: Tuesday, May 26, 2020 3:08 PM
>> To: General discussion list for the 389 Directory server project. <389-users@xxxxxxxxxxxxxxxxxxxxxxx>
>> Subject: [EXTERNAL] [389-users] Re: Advice to bring new servers into production
>> 
>> My experience has been that the replicas and consumers have a unique ID, beyond just an IP address, which creates the trust relationship with the master. If your goal is simply to maintain an IP so your clients don't have to be repointed, I would build each new LDAP host and its replication agreement, and then, as you decommission the old hosts, use their IP addresses as virtual IP addresses on the replacement hosts. It would take a quick restart of the LDAP service to start a listener on the virtual IP address.
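[Editor's note: the virtual-IP approach described above can be sketched as a config fragment. This is a minimal example for CentOS 7 network-scripts (the initscripts `IPADDRn`/`PREFIXn` convention for secondary addresses); the interface name `eth0` and the documentation-range addresses are placeholders, not values from this thread - substitute your real interface and the old consumer's actual IP.]

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (excerpt)
BOOTPROTO=none
# Primary address of the replacement host
IPADDR=192.0.2.10
PREFIX=24
# Old consumer's address, carried over as a secondary ("virtual") IP
IPADDR1=192.0.2.25
PREFIX1=24
```

[For a non-persistent trial, the same secondary address can be added at runtime with `ip addr add 192.0.2.25/24 dev eth0`. Either way, as Leo notes, the directory service needs a restart to open a listener on the new address unless it already binds to all interfaces.]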
>> 
>> Leo Pleiman
>> Senior System Engineer
>> Direct 202-787-3622
>> Cell 410-688-3873
>> 
>> On Tue, May 26, 2020 at 3:57 PM Crocker, Deborah <crock@xxxxxx> wrote:
>> We have a setup with 2 multi-masters and 3 consumers. We are now building new hosts and ultimately want to put them in place at the same IP addresses as the original ones. I need some advice on how to do this quickly and cleanly.
>> 
>> To add a new consumer, the current idea is to set it up and create replication agreements from each master using the consumer's DNS name (without starting continuous replication yet). After initializing the new consumer from one master, turn off the old consumer, remove the old consumer's agreement from each master, and re-IP the new consumer. Do we need to restart the masters to re-read DNS, or will they pick that up when they start the next replication? Is this the best way to do this?
>> 
>> Thanks
>> 
>> Deborah Crocker, PhD
>> Systems Engineer III
>> Office of Information Technology
>> The University of Alabama
>> Box 870346
>> Tuscaloosa, AL 36587
>> Office 205-348-3758 | Fax 205-348-9393
>> deborah.crocker@xxxxxx
>> 
>> _______________________________________________
>> 389-users mailing list -- 389-users@xxxxxxxxxxxxxxxxxxxxxxx
>> To unsubscribe send an email to 389-users-leave@xxxxxxxxxxxxxxxxxxxxxxx
>> Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
>> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
>> List Archives: https://lists.fedoraproject.org/archives/list/389-users@lists.fedoraproject.org
> 
> —
> Sincerely,
> 
> William Brown
> 
> Senior Software Engineer, 389 Directory Server
> SUSE Labs

—
Sincerely,

William Brown

Senior Software Engineer, 389 Directory Server
SUSE Labs