Build failed in Jenkins: 389-DS-NIGHTLY #113


 



See <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/113/>

------------------------------------------
[...truncated 7309 lines...]
nsslapd-directory: /var/lib/dirsrv/slapd-standalone/db/parent_base
nsslapd-dncachememsize: 10485760
nsslapd-readonly: off
nsslapd-require-index: off
nsslapd-suffix: o=test_parent
objectClass: top
objectClass: extensibleObject
objectClass: nsBackendInstance


INFO:lib389:Entry dn: cn="o=test_parent",cn=mapping tree,cn=config
cn: o=test_parent
nsslapd-backend: parent_base
nsslapd-state: backend
objectclass: top
objectclass: extensibleObject
objectclass: nsMappingTree


INFO:lib389:Found entry dn: cn=o\3Dtest_parent,cn=mapping tree,cn=config
cn: o=test_parent
nsslapd-backend: parent_base
nsslapd-state: backend
objectClass: top
objectClass: extensibleObject
objectClass: nsMappingTree


INFO:suites.paged_results.paged_results_test:Adding suffix:ou=child,o=test_parent and backend: child_base
INFO:lib389:List backend with suffix=ou=child,o=test_parent
INFO:lib389:Creating a local backend
INFO:lib389:List backend cn=child_base,cn=ldbm database,cn=plugins,cn=config
INFO:lib389:Found entry dn: cn=child_base,cn=ldbm database,cn=plugins,cn=config
cn: child_base
nsslapd-cachememsize: 10485760
nsslapd-cachesize: -1
nsslapd-directory: /var/lib/dirsrv/slapd-standalone/db/child_base
nsslapd-dncachememsize: 10485760
nsslapd-readonly: off
nsslapd-require-index: off
nsslapd-suffix: ou=child,o=test_parent
objectClass: top
objectClass: extensibleObject
objectClass: nsBackendInstance


INFO:lib389:Entry dn: cn="ou=child,o=test_parent",cn=mapping tree,cn=config
cn: ou=child,o=test_parent
nsslapd-backend: child_base
nsslapd-parent-suffix: o=test_parent
nsslapd-state: backend
objectclass: top
objectclass: extensibleObject
objectclass: nsMappingTree


INFO:lib389:Found entry dn: cn=ou\3Dchild\2Co\3Dtest_parent,cn=mapping tree,cn=config
cn: ou=child,o=test_parent
nsslapd-backend: child_base
nsslapd-parent-suffix: o=test_parent
nsslapd-state: backend
objectClass: top
objectClass: extensibleObject
objectClass: nsMappingTree


INFO:suites.paged_results.paged_results_test:Adding ACI to allow our test user to search
----------------------------- Captured stderr call -----------------------------
INFO:suites.paged_results.paged_results_test:Clear the access log
INFO:suites.paged_results.paged_results_test:Adding 10 users
INFO:suites.paged_results.paged_results_test:Adding 10 users
INFO:suites.paged_results.paged_results_test:Set DM bind
INFO:suites.paged_results.paged_results_test:Running simple paged result search with - search suffix: o=test_parent; filter: (uid=test*); attr list ['dn', 'sn']; page_size = 4; controls: [<ldap.controls.libldap.SimplePagedResultsControl instance at 0x7f6f4af4aef0>].
INFO:suites.paged_results.paged_results_test:Getting page 0
INFO:suites.paged_results.paged_results_test:Getting page 1
INFO:suites.paged_results.paged_results_test:Getting page 2
INFO:suites.paged_results.paged_results_test:Getting page 3
INFO:suites.paged_results.paged_results_test:Getting page 4
INFO:suites.paged_results.paged_results_test:Getting page 5
INFO:suites.paged_results.paged_results_test:20 results
INFO:suites.paged_results.paged_results_test:Restart the server to flush the logs
INFO:suites.paged_results.paged_results_test:Assert that last pr_cookie == -1 and others pr_cookie == 0
INFO:suites.paged_results.paged_results_test:Remove added users
INFO:suites.paged_results.paged_results_test:Deleting 10 users
INFO:suites.paged_results.paged_results_test:Deleting 10 users
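The "last pr_cookie == -1 and others pr_cookie == 0" assertion above boils down to a simple check over the cookie values the test scrapes from the access log. A minimal standalone sketch of that check (the `pr_cookies` list is hypothetical, standing in for whatever values lib389 actually parses):

```python
def check_pr_cookies(cookies):
    """All pages but the last must report pr_cookie=0 (paging still in
    progress); the final page must report pr_cookie=-1 (end of the
    simple paged results search)."""
    return all(c == 0 for c in cookies[:-1]) and cookies[-1] == -1

# Hypothetical pr_cookie values, one per returned page, in order.
pr_cookies = [0, 0, 0, 0, 0, -1]
assert check_pr_cookies(pr_cookies)
assert not check_pr_cookies([0, -1, 0])
```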
________________________ test_cleanallruv_stress_clean _________________________

topology = <suites.replication.cleanallruv_test.TopologyReplication object at 0x7f6f4bf19190>

    def test_cleanallruv_stress_clean(topology):
        '''
        Put each server (m1 - m4) under stress, and perform the entire clean process
        '''
        log.info('Running test_cleanallruv_stress_clean...')
        log.info('test_cleanallruv_stress_clean: put all the masters under load...')

        # Put all the masters under load
        m1_add_users = AddUsers(topology.master1, 2000)
        m1_add_users.start()
        m2_add_users = AddUsers(topology.master2, 2000)
        m2_add_users.start()
        m3_add_users = AddUsers(topology.master3, 2000)
        m3_add_users.start()
        m4_add_users = AddUsers(topology.master4, 2000)
        m4_add_users.start()

        # Allow some time for replication to start flowing in all directions
        log.info('test_cleanallruv_stress_clean: allow some time for replication to get flowing...')
        time.sleep(5)

        # Put master 4 into read-only mode
        log.info('test_cleanallruv_stress_clean: put master 4 into read-only mode...')
        try:
            topology.master4.modify_s(DN_CONFIG, [(ldap.MOD_REPLACE, 'nsslapd-readonly', 'on')])
        except ldap.LDAPError as e:
            log.fatal('test_cleanallruv_stress_clean: Failed to put master 4 into read-only mode: error ' +
                      e.message['desc'])
            assert False

        # We need to wait for master 4 to push its changes out
        log.info('test_cleanallruv_stress_clean: allow some time for master 4 to push changes out (60 seconds)...')
        time.sleep(60)

        # Disable master 4
        log.info('test_cleanallruv_stress_clean: disable replication on master 4...')
        try:
            topology.master4.replica.disableReplication(DEFAULT_SUFFIX)
        except:
            log.fatal('test_cleanallruv_stress_clean: failed to disable replication')
            assert False

        # Remove the agreements from the other masters that point to master 4
        remove_master4_agmts("test_cleanallruv_stress_clean", topology)

        # Run the task
        log.info('test_cleanallruv_stress_clean: Run the cleanAllRUV task...')
        try:
            topology.master1.tasks.cleanAllRUV(suffix=DEFAULT_SUFFIX, replicaid='4',
                                               args={TASK_WAIT: True})
        except ValueError as e:
            log.fatal('test_cleanallruv_stress_clean: Problem running cleanAllRuv task: ' +
                      e.message['desc'])
            assert False

        # Wait for the updates to finish
        log.info('test_cleanallruv_stress_clean: wait for all the updates to finish...')
        m1_add_users.join()
        m2_add_users.join()
        m3_add_users.join()
        m4_add_users.join()

        # Check the other masters' RUVs for 'replica 4'
        log.info('test_cleanallruv_stress_clean: check if all the replicas have been cleaned...')
        clean = check_ruvs("test_cleanallruv_stress_clean", topology)
        if not clean:
            log.fatal('test_cleanallruv_stress_clean: Failed to clean replicas')
            assert False

        log.info('test_cleanallruv_stress_clean:  PASSED, restoring master 4...')

        #
        # Cleanup - restore master 4
        #

        # Sleep for a bit to let replication complete
        log.info("Sleep for 120 seconds to allow replication to complete...")
        time.sleep(120)

        # Turn off read-only mode
        try:
            topology.master4.modify_s(DN_CONFIG, [(ldap.MOD_REPLACE, 'nsslapd-readonly', 'off')])
        except ldap.LDAPError as e:
            log.fatal('test_cleanallruv_stress_clean: Failed to take master 4 out of read-only mode: error ' +
                      e.message['desc'])
            assert False

>       restore_master4(topology)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/replication/cleanallruv_test.py>:1208: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/replication/cleanallruv_test.py>:571: in restore_master4
    topology.master2.start(timeout=30)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:1096: in start
    "dirsrv@%s" % self.serverid])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

popenargs = (['/usr/bin/systemctl', 'start', 'dirsrv@master_2'],), kwargs = {}
retcode = 1, cmd = ['/usr/bin/systemctl', 'start', 'dirsrv@master_2']

    def check_call(*popenargs, **kwargs):
        """Run command with arguments.  Wait for command to complete.  If
        the exit code was zero then return, otherwise raise
        CalledProcessError.  The CalledProcessError object will have the
        return code in the returncode attribute.
    
        The arguments are the same as for the Popen constructor.  Example:
    
        check_call(["ls", "-l"])
        """
        retcode = call(*popenargs, **kwargs)
        if retcode:
            cmd = kwargs.get("args")
            if cmd is None:
                cmd = popenargs[0]
>           raise CalledProcessError(retcode, cmd)
E           CalledProcessError: Command '['/usr/bin/systemctl', 'start', 'dirsrv@master_2']' returned non-zero exit status 1

/usr/lib64/python2.7/subprocess.py:541: CalledProcessError
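For context, the CalledProcessError in the stack above is simply subprocess.check_call propagating systemctl's non-zero exit status. A minimal standalone sketch of the same failure mode, substituting `false` for the failing systemctl invocation:

```python
import subprocess

# check_call() waits for the command and raises CalledProcessError when the
# exit status is non-zero -- exactly what happens when `systemctl start`
# fails because the unit's control process crashed.
try:
    subprocess.check_call(['false'])
except subprocess.CalledProcessError as e:
    print('command {} failed with status {}'.format(e.cmd, e.returncode))
    # -> command ['false'] failed with status 1
```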
----------------------------- Captured stderr call -----------------------------
INFO:suites.replication.cleanallruv_test:Running test_cleanallruv_stress_clean...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: put all the masters under load...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: allow some time for replication to get flowing...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: put master 4 into read-only mode...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: allow some time for master 4 to push changes out (60 seconds)...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: disable replication on master 4...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: remove all the agreements to master 4...
INFO:lib389:Agreement (cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config) was successfully removed
INFO:lib389:Agreement (cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config) was successfully removed
INFO:lib389:Agreement (cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config) was successfully removed
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: Run the cleanAllRUV task...
INFO:lib389:cleanAllRUV task (task-10272016_023156) completed successfully
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: wait for all the updates to finish...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: check if all the replicas have been cleaned...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: Master 1 is cleaned.
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: Master 2 is cleaned.
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: Master 3 is cleaned.
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean:  PASSED, restoring master 4...
INFO:suites.replication.cleanallruv_test:Sleep for 120 seconds to allow replication to complete...
INFO:suites.replication.cleanallruv_test:Restoring master 4...
INFO:lib389:List backend with suffix=dc=example,dc=com
WARNING:lib389:entry cn=changelog5,cn=config already exists
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38941,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38942,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38943,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
Job for dirsrv@master_2.service failed because a fatal signal was delivered causing the control process to dump core. See "systemctl status dirsrv@master_2.service" and "journalctl -xe" for details.
============== 35 failed, 481 passed, 5 error in 8092.80 seconds ===============
+ '[' 1 -ne 0 ']'
+ echo CI Tests 'FAILED!'
CI Tests FAILED!
+ echo ============================= test session starts ============================== platform linux2 -- Python 2.7.12, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- /usr/bin/python2 cachedir: .cache rootdir: <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests,> inifile: plugins: sourceorder-0.5, multihost-1.0 collecting ... collected 520 items tickets/ticket1347760_test.py::test_ticket1347760 FAILED tickets/ticket365_test.py::test_ticket365 PASSED tickets/ticket47313_test.py::test_ticket47313_run PASSED tickets/ticket47384_test.py::test_ticket47384 PASSED tickets/ticket47431_test.py::test_ticket47431_0 PASSED tickets/ticket47431_test.py::test_ticket47431_1 FAILED tickets/ticket47431_test.py::test_ticket47431_2 PASSED tickets/ticket47431_test.py::test_ticket47431_3 PASSED tickets/ticket47462_test.py::test_ticket47462 FAILED tickets/ticket47490_test.py::test_ticket47490_init PASSED tickets/ticket47490_test.py::test_ticket47490_one PASSED tickets/ticket47490_test.py::test_ticket47490_two PASSED tickets/ticket47490_test.py::test_ticket47490_three PASSED tickets/ticket47490_test.py::test_ticket47490_four PASSED tickets/ticket47490_test.py::test_ticket47490_five PASSED tickets/ticket47490_test.py::test_ticket47490_six PASSED tickets/ticket47490_test.py::test_ticket47490_seven PASSED tickets/ticket47490_test.py::test_ticket47490_eight PASSED tickets/ticket47490_test.py::test_ticket47490_nine PASSED tickets/ticket47536_test.py::test_ticket47536 FAILED tickets/ticket47553_test.py::test_ticket47553 PASSED tickets/ticket47560_test.py::test_ticket47560 PASSED tickets/ticket47573_test.py::test_ticket47573_init PASSED tickets/ticket47573_test.py::test_ticket47573_one PASSED tickets/ticket47573_test.py::test_ticket47573_two PASSED tickets/ticket47573_test.py::test_ticket47573_three PASSED tickets/ticket47619_test.py::test_ticket47619_init FAILED tickets/ticket47619_test.py::test_ticket47619_create_index PASSED 
tickets/ticket47619_test.py::test_ticket47619_reindex PASSED tickets/ticket47619_test.py::test_ticket47619_check_indexed_search PASSED tickets/ticket47640_test.py::test_ticket47640 PASSED tickets/ticket47653MMR_test.py::test_ticket47653_init PASSED tickets/ticket47653MMR_test.py::test_ticket47653_add FAILED tickets/ticket47653MMR_test.py::test_ticket47653_modify FAILED tickets/ticket47653_test.py::test_ticket47653_init PASSED tickets/ticket47653_test.py::test_ticket47653_add PASSED tickets/ticket47653_test.py::test_ticket47653_search PASSED tickets/ticket47653_test.py::test_ticket47653_modify PASSED tickets/ticket47653_test.py::test_ticket47653_delete PASSED tickets/ticket47669_test.py::test_ticket47669_init FAILED tickets/ticket47669_test.py::test_ticket47669_changelog_maxage FAILED tickets/ticket47669_test.py::test_ticket47669_changelog_triminterval FAILED tickets/ticket47669_test.py::test_ticket47669_changelog_compactdbinterval FAILED tickets/ticket47669_test.py::test_ticket47669_retrochangelog_maxage FAILED tickets/ticket47676_test.py::test_ticket47676_init PASSED tickets/ticket47676_test.py::test_ticket47676_skip_oc_at PASSED tickets/ticket47676_test.py::test_ticket47676_reject_action PASSED tickets/ticket47714_test.py::test_ticket47714_init PASSED tickets/ticket47714_test.py::test_ticket47714_run_0 PASSED tickets/ticket47714_test.py::test_ticket47714_run_1 PASSED tickets/ticket47721_test.py::test_ticket47721_init PASSED tickets/ticket47721_test.py::test_ticket47721_0 PASSED tickets/ticket47721_test.py::test_ticket47721_1 PASSED tickets/ticket47721_test.py::test_ticket47721_2 PASSED tickets/ticket47721_test.py::test_ticket47721_3 PASSED tickets/ticket47721_test.py::test_ticket47721_4 PASSED tickets/ticket47781_test.py::test_ticket47781 PASSED tickets/ticket47787_test.py::test_ticket47787_init PASSED tickets/ticket47787_test.py::test_ticket47787_2 PASSED tickets/ticket47808_test.py::test_ticket47808_run PASSED tickets/ticket47815_test.py::test_ticket47815 PASSED tickets/ticket47819_test.py::test_ticket47819 PASSED 
tickets/ticket47823_test.py::test_ticket47823_init FAILED tickets/ticket47823_test.py::test_ticket47823_one_container_add PASSED tickets/ticket47823_test.py::test_ticket47823_one_container_mod PASSED tickets/ticket47823_test.py::test_ticket47823_one_container_modrdn PASSED tickets/ticket47823_test.py::test_ticket47823_multi_containers_add PASSED tickets/ticket47823_test.py::test_ticket47823_multi_containers_mod PASSED tickets/ticket47823_test.py::test_ticket47823_multi_containers_modrdn PASSED tickets/ticket47823_test.py::test_ticket47823_across_multi_containers_add PASSED tickets/ticket47823_test.py::test_ticket47823_across_multi_containers_mod PASSED tickets/ticket47823_test.py::test_ticket47823_across_multi_containers_modrdn PASSED tickets/ticket47823_test.py::test_ticket47823_invalid_config_1 FAILED tickets/ticket47823_test.py::test_ticket47823_invalid_config_2 FAILED tickets/ticket47823_test.py::test_ticket47823_invalid_config_3 FAILED tickets/ticket47823_test.py::test_ticket47823_invalid_config_4 FAILED tickets/ticket47823_test.py::test_ticket47823_invalid_config_5 FAILED tickets/ticket47823_test.py::test_ticket47823_invalid_config_6 FAILED tickets/ticket47823_test.py::test_ticket47823_invalid_config_7 FAILED tickets/ticket47828_test.py::test_ticket47828_init PASSED tickets/ticket47828_test.py::test_ticket47828_run_0 PASSED tickets/ticket47828_test.py::test_ticket47828_run_1 PASSED tickets/ticket47828_test.py::test_ticket47828_run_2 PASSED tickets/ticket47828_test.py::test_ticket47828_run_3 PASSED tickets/ticket47828_test.py::test_ticket47828_run_4 PASSED tickets/ticket47828_test.py::test_ticket47828_run_5 PASSED tickets/ticket47828_test.py::test_ticket47828_run_6 PASSED tickets/ticket47828_test.py::test_ticket47828_run_7 PASSED tickets/ticket47828_test.py::test_ticket47828_run_8 PASSED tickets/ticket47828_test.py::test_ticket47828_run_9 PASSED tickets/ticket47828_test.py::test_ticket47828_run_10 PASSED tickets/ticket47828_test.py::test_ticket47828_run_11 PASSED tickets/ticket47828_test.py::test_ticket47828_run_12 
PASSED tickets/ticket47828_test.py::test_ticket47828_run_13 PASSED tickets/ticket47828_test.py::test_ticket47828_run_14 PASSED tickets/ticket47828_test.py::test_ticket47828_run_15 PASSED tickets/ticket47828_test.py::test_ticket47828_run_16 PASSED tickets/ticket47828_test.py::test_ticket47828_run_17 PASSED tickets/ticket47828_test.py::test_ticket47828_run_18 PASSED tickets/ticket47828_test.py::test_ticket47828_run_19 PASSED tickets/ticket47828_test.py::test_ticket47828_run_20 PASSED tickets/ticket47828_test.py::test_ticket47828_run_21 PASSED tickets/ticket47828_test.py::test_ticket47828_run_22 PASSED tickets/ticket47828_test.py::test_ticket47828_run_23 PASSED tickets/ticket47828_test.py::test_ticket47828_run_24 PASSED tickets/ticket47828_test.py::test_ticket47828_run_25 PASSED tickets/ticket47828_test.py::test_ticket47828_run_26 PASSED tickets/ticket47828_test.py::test_ticket47828_run_27 PASSED tickets/ticket47828_test.py::test_ticket47828_run_28 PASSED tickets/ticket47828_test.py::test_ticket47828_run_29 PASSED tickets/ticket47828_test.py::test_ticket47828_run_30 PASSED tickets/ticket47828_test.py::test_ticket47828_run_31 PASSED tickets/ticket47829_test.py::test_ticket47829_init PASSED tickets/ticket47829_test.py::test_ticket47829_mod_active_user_1 PASSED tickets/ticket47829_test.py::test_ticket47829_mod_active_user_2 PASSED tickets/ticket47829_test.py::test_ticket47829_mod_active_user_3 PASSED tickets/ticket47829_test.py::test_ticket47829_mod_stage_user_1 PASSED tickets/ticket47829_test.py::test_ticket47829_mod_stage_user_2 PASSED tickets/ticket47829_test.py::test_ticket47829_mod_stage_user_3 PASSED tickets/ticket47829_test.py::test_ticket47829_mod_out_user_1 PASSED tickets/ticket47829_test.py::test_ticket47829_mod_out_user_2 PASSED tickets/ticket47829_test.py::test_ticket47829_mod_out_user_3 PASSED tickets/ticket47829_test.py::test_ticket47829_mod_active_user_modrdn_active_user_1 PASSED tickets/ticket47829_test.py::test_ticket47829_mod_active_user_modrdn_stage_user_1 PASSED 
tickets/ticket47829_test.py::test_ticket47829_mod_active_user_modrdn_out_user_1 PASSED tickets/ticket47829_test.py::test_ticket47829_mod_modrdn_1 PASSED tickets/ticket47829_test.py::test_ticket47829_mod_stage_user_modrdn_active_user_1 PASSED tickets/ticket47829_test.py::test_ticket47829_mod_stage_user_modrdn_stage_user_1 PASSED tickets/ticket47829_test.py::test_ticket47829_indirect_active_group_1 PASSED tickets/ticket47829_test.py::test_ticket47829_indirect_active_group_2 PASSED tickets/ticket47829_test.py::test_ticket47829_indirect_active_group_3 PASSED tickets/ticket47829_test.py::test_ticket47829_indirect_active_group_4 PASSED tickets/ticket47833_test.py::test_ticket47829_init PASSED tickets/ticket47833_test.py::test_ticket47829_mod_stage_user_modrdn_stage_user_1 PASSED tickets/ticket47869MMR_test.py::test_ticket47869_init PASSED tickets/ticket47869MMR_test.py::test_ticket47869_check PASSED tickets/ticket47871_test.py::test_ticket47871_init FAILED tickets/ticket47871_test.py::test_ticket47871_1 PASSED tickets/ticket47871_test.py::test_ticket47871_2 PASSED tickets/ticket47900_test.py::test_ticket47900 PASSED tickets/ticket47910_test.py::test_ticket47910_logconv_start_end_positive PASSED tickets/ticket47910_test.py::test_ticket47910_logconv_start_end_negative PASSED tickets/ticket47910_test.py::test_ticket47910_logconv_start_end_invalid PASSED tickets/ticket47910_test.py::test_ticket47910_logconv_noaccesslogs PASSED tickets/ticket47920_test.py::test_ticket47920_init PASSED tickets/ticket47920_test.py::test_ticket47920_mod_readentry_ctrl PASSED tickets/ticket47921_test.py::test_ticket47921 PASSED tickets/ticket47927_test.py::test_ticket47927_init PASSED tickets/ticket47927_test.py::test_ticket47927_one PASSED tickets/ticket47927_test.py::test_ticket47927_two PASSED tickets/ticket47927_test.py::test_ticket47927_three PASSED tickets/ticket47927_test.py::test_ticket47927_four PASSED tickets/ticket47927_test.py::test_ticket47927_five PASSED tickets/ticket47927_test.py::test_ticket47927_six PASSED tickets/ticket47931_test.py::test_ticket47931 PASSED 
tickets/ticket47937_test.py::test_ticket47937 PASSED tickets/ticket47950_test.py::test_ticket47950 PASSED tickets/ticket47953_test.py::test_ticket47953 PASSED tickets/ticket47963_test.py::test_ticket47963 PASSED tickets/ticket47966_test.py::test_ticket47966 PASSED tickets/ticket47970_test.py::test_ticket47970 PASSED tickets/ticket47973_test.py::test_ticket47973 PASSED tickets/ticket47976_test.py::test_ticket47976_init PASSED tickets/ticket47976_test.py::test_ticket47976_1 PASSED tickets/ticket47976_test.py::test_ticket47976_2 PASSED tickets/ticket47976_test.py::test_ticket47976_3 PASSED tickets/ticket47980_test.py::test_ticket47980 PASSED tickets/ticket47981_test.py::test_ticket47981 PASSED tickets/ticket47988_test.py::test_ticket47988_init PASSED tickets/ticket47988_test.py::test_ticket47988_1 PASSED tickets/ticket47988_test.py::test_ticket47988_2 PASSED tickets/ticket47988_test.py::test_ticket47988_3 PASSED tickets/ticket47988_test.py::test_ticket47988_4 PASSED tickets/ticket47988_test.py::test_ticket47988_5 PASSED tickets/ticket47988_test.py::test_ticket47988_6 PASSED tickets/ticket48005_test.py::test_ticket48005_setup PASSED tickets/ticket48005_test.py::test_ticket48005_memberof PASSED tickets/ticket48005_test.py::test_ticket48005_automember PASSED tickets/ticket48005_test.py::test_ticket48005_syntaxvalidate PASSED tickets/ticket48005_test.py::test_ticket48005_usn PASSED tickets/ticket48005_test.py::test_ticket48005_schemareload PASSED tickets/ticket48013_test.py::test_ticket48013 PASSED tickets/ticket48026_test.py::test_ticket48026 PASSED tickets/ticket48109_test.py::test_ticket48109 FAILED tickets/ticket48170_test.py::test_ticket48170 PASSED tickets/ticket48194_test.py::test_init PASSED tickets/ticket48194_test.py::test_run_0 PASSED tickets/ticket48194_test.py::test_run_1 PASSED tickets/ticket48194_test.py::test_run_2 PASSED tickets/ticket48194_test.py::test_run_3 PASSED tickets/ticket48194_test.py::test_run_4 PASSED tickets/ticket48194_test.py::test_run_5 PASSED tickets/ticket48194_test.py::test_run_6 PASSED 
tickets/ticket48194_test.py::test_run_7 PASSED tickets/ticket48194_test.py::test_run_8 PASSED tickets/ticket48194_test.py::test_run_9 PASSED tickets/ticket48194_test.py::test_run_10 PASSED tickets/ticket48194_test.py::test_run_11 PASSED tickets/ticket48212_test.py::test_ticket48212 PASSED tickets/ticket48214_test.py::test_ticket48214_run PASSED tickets/ticket48226_test.py::test_ticket48226_set_purgedelay PASSED tickets/ticket48226_test.py::test_ticket48226_1 PASSED tickets/ticket48228_test.py::test_ticket48228_test_global_policy PASSED tickets/ticket48228_test.py::test_ticket48228_test_subtree_policy PASSED tickets/ticket48233_test.py::test_ticket48233 PASSED tickets/ticket48234_test.py::test_ticket48234 PASSED tickets/ticket48252_test.py::test_ticket48252_setup PASSED tickets/ticket48252_test.py::test_ticket48252_run_0 PASSED tickets/ticket48252_test.py::test_ticket48252_run_1 PASSED tickets/ticket48265_test.py::test_ticket48265_test PASSED tickets/ticket48266_test.py::test_ticket48266_fractional PASSED tickets/ticket48266_test.py::test_ticket48266_check_repl_desc PASSED tickets/ticket48266_test.py::test_ticket48266_count_csn_evaluation FAILED tickets/ticket48270_test.py::test_ticket48270_init PASSED tickets/ticket48270_test.py::test_ticket48270_homeDirectory_indexed_cis FAILED tickets/ticket48270_test.py::test_ticket48270_homeDirectory_mixed_value PASSED tickets/ticket48270_test.py::test_ticket48270_extensible_search PASSED tickets/ticket48272_test.py::test_ticket48272 PASSED tickets/ticket48294_test.py::test_48294_init PASSED tickets/ticket48294_test.py::test_48294_run_0 PASSED tickets/ticket48294_test.py::test_48294_run_1 PASSED tickets/ticket48294_test.py::test_48294_run_2 PASSED tickets/ticket48295_test.py::test_48295_init PASSED tickets/ticket48295_test.py::test_48295_run PASSED tickets/ticket48312_test.py::test_ticket48312 PASSED tickets/ticket48325_test.py::test_ticket48325 PASSED tickets/ticket48342_test.py::test_ticket4026 ERROR tickets/ticket48354_test.py::test_ticket48354 PASSED 
tickets/ticket48362_test.py::test_ticket48362 PASSED tickets/ticket48366_test.py::test_ticket48366_init PASSED tickets/ticket48366_test.py::test_ticket48366_search_user PASSED tickets/ticket48366_test.py::test_ticket48366_search_dm PASSED tickets/ticket48370_test.py::test_ticket48370 PASSED tickets/ticket48383_test.py::test_ticket48383 FAILED tickets/ticket48497_test.py::test_ticket48497_init PASSED tickets/ticket48497_test.py::test_ticket48497_homeDirectory_mixed_value PASSED tickets/ticket48497_test.py::test_ticket48497_extensible_search PASSED tickets/ticket48497_test.py::test_ticket48497_homeDirectory_index_cfg PASSED tickets/ticket48497_test.py::test_ticket48497_homeDirectory_index_run FAILED tickets/ticket48637_test.py::test_ticket48637 PASSED tickets/ticket48665_test.py::test_ticket48665 PASSED tickets/ticket48745_test.py::test_ticket48745_init PASSED tickets/ticket48745_test.py::test_ticket48745_homeDirectory_indexed_cis FAILED tickets/ticket48745_test.py::test_ticket48745_homeDirectory_mixed_value PASSED tickets/ticket48745_test.py::test_ticket48745_extensible_search_after_index PASSED tickets/ticket48746_test.py::test_ticket48746_init PASSED tickets/ticket48746_test.py::test_ticket48746_homeDirectory_indexed_cis FAILED tickets/ticket48746_test.py::test_ticket48746_homeDirectory_mixed_value PASSED tickets/ticket48746_test.py::test_ticket48746_extensible_search_after_index PASSED tickets/ticket48746_test.py::test_ticket48746_homeDirectory_indexed_ces FAILED tickets/ticket48755_test.py::test_ticket48755 PASSED tickets/ticket48759_test.py::test_ticket48759 PASSED tickets/ticket48784_test.py::test_ticket48784 PASSED tickets/ticket48798_test.py::test_ticket48798 PASSED tickets/ticket48799_test.py::test_ticket48799 PASSED tickets/ticket48808_test.py::test_ticket48808 PASSED tickets/ticket48844_test.py::test_ticket48844_init PASSED tickets/ticket48844_test.py::test_ticket48844_bitwise_on PASSED tickets/ticket48844_test.py::test_ticket48844_bitwise_off PASSED tickets/ticket48891_test.py::test_ticket48891_setup PASSED 
tickets/ticket48893_test.py::test_ticket48893 PASSED tickets/ticket48896_test.py::test_ticket48896 PASSED tickets/ticket48906_test.py::test_ticket48906_setup PASSED tickets/ticket48906_test.py::test_ticket48906_dblock_default PASSED tickets/ticket48906_test.py::test_ticket48906_dblock_ldap_update FAILED tickets/ticket48906_test.py::test_ticket48906_dblock_edit_update FAILED tickets/ticket48906_test.py::test_ticket48906_dblock_robust FAILED tickets/ticket48916_test.py::test_ticket48916 PASSED tickets/ticket48956_test.py::test_ticket48956 PASSED tickets/ticket548_test.py::test_ticket548_test_with_no_policy PASSED tickets/ticket548_test.py::test_ticket548_test_global_policy PASSED tickets/ticket548_test.py::test_ticket548_test_subtree_policy PASSED suites/acct_usability_plugin/acct_usability_test.py::test_acct_usability_init PASSED suites/acct_usability_plugin/acct_usability_test.py::test_acct_usability_ PASSED suites/acctpolicy_plugin/acctpolicy_test.py::test_acctpolicy_init PASSED suites/acctpolicy_plugin/acctpolicy_test.py::test_acctpolicy_ PASSED suites/acl/acl_test.py::test_aci_attr_subtype_targetattr[lang-ja] PASSED suites/acl/acl_test.py::test_aci_attr_subtype_targetattr[binary] PASSED suites/acl/acl_test.py::test_aci_attr_subtype_targetattr[phonetic] PASSED suites/acl/acl_test.py::test_mode_default_add_deny PASSED suites/acl/acl_test.py::test_mode_default_delete_deny PASSED suites/acl/acl_test.py::test_moddn_staging_prod[0-cn=staged user,dc=example,dc=com-cn=accounts,dc=example,dc=com-False] PASSED suites/acl/acl_test.py::test_moddn_staging_prod[1-cn=staged user,dc=example,dc=com-cn=accounts,dc=example,dc=com-False] PASSED suites/acl/acl_test.py::test_moddn_staging_prod[2-cn=staged user,dc=example,dc=com-cn=bad*,dc=example,dc=com-True] PASSED suites/acl/acl_test.py::test_moddn_staging_prod[3-cn=st*,dc=example,dc=com-cn=accounts,dc=example,dc=com-False] PASSED suites/acl/acl_test.py::test_moddn_staging_prod[4-cn=bad*,dc=example,dc=com-cn=accounts,dc=example,dc=com-True] PASSED 
suites/acl/acl_test.py::test_moddn_staging_prod[5-cn=st*,dc=example,dc=com-cn=ac*,dc=example,dc=com-False] PASSED suites/acl/acl_test.py::test_moddn_staging_prod[6-None-cn=ac*,dc=example,dc=com-False] PASSED suites/acl/acl_test.py::test_moddn_staging_prod[7-cn=st*,dc=example,dc=com-None-False] PASSED suites/acl/acl_test.py::test_moddn_staging_prod[8-None-None-False] PASSED suites/acl/acl_test.py::test_moddn_staging_prod_9 PASSED suites/acl/acl_test.py::test_moddn_prod_staging PASSED suites/acl/acl_test.py::test_check_repl_M2_to_M1 PASSED suites/acl/acl_test.py::test_moddn_staging_prod_except PASSED suites/acl/acl_test.py::test_mode_default_ger_no_moddn PASSED suites/acl/acl_test.py::test_mode_default_ger_with_moddn PASSED suites/acl/acl_test.py::test_mode_switch_default_to_legacy PASSED suites/acl/acl_test.py::test_mode_legacy_ger_no_moddn1 PASSED suites/acl/acl_test.py::test_mode_legacy_ger_no_moddn2 PASSED suites/acl/acl_test.py::test_mode_legacy_ger_with_moddn PASSED suites/acl/acl_test.py::test_rdn_write_get_ger PASSED suites/acl/acl_test.py::test_rdn_write_modrdn_anonymous PASSED suites/attr_encryption/attr_encrypt_test.py::test_attr_encrypt_init PASSED suites/attr_encryption/attr_encrypt_test.py::test_attr_encrypt_ PASSED suites/attr_uniqueness_plugin/attr_uniqueness_test.py::test_attr_uniqueness_init PASSED suites/attr_uniqueness_plugin/attr_uniqueness_test.py::test_attr_uniqueness PASSED suites/automember_plugin/automember_test.py::test_automember_init PASSED suites/automember_plugin/automember_test.py::test_automember_ PASSED suites/basic/basic_test.py::test_basic_ops PASSED suites/basic/basic_test.py::test_basic_import_export PASSED suites/basic/basic_test.py::test_basic_backup PASSED suites/basic/basic_test.py::test_basic_acl PASSED suites/basic/basic_test.py::test_basic_searches PASSED suites/basic/basic_test.py::test_basic_referrals PASSED suites/basic/basic_test.py::test_basic_systemctl PASSED suites/basic/basic_test.py::test_basic_ldapagent PASSED suites/basic/basic_test.py::test_basic_dse PASSED 
'suites/basic/basic_test.py::test_def_rootdse_attr[namingContexts]' PASSE
D 'suites/basic/basic_test.py::test_def_rootdse_attr[supportedLDAPVersion]' PASSED 'suites/basic/basic_test.py::test_def_rootdse_attr[supportedControl]' PASSED 'suites/basic/basic_test.py::test_def_rootdse_attr[supportedExtension]' PASSED 'suites/basic/basic_test.py::test_def_rootdse_attr[supportedSASLMechanisms]' PASSED 'suites/basic/basic_test.py::test_def_rootdse_attr[vendorName]' PASSED 'suites/basic/basic_test.py::test_def_rootdse_attr[vendorVersion]' PASSED 'suites/basic/basic_test.py::test_mod_def_rootdse_attr[namingContexts]' PASSED 'suites/basic/basic_test.py::test_mod_def_rootdse_attr[supportedLDAPVersion]' PASSED 'suites/basic/basic_test.py::test_mod_def_rootdse_attr[supportedControl]' PASSED 'suites/basic/basic_test.py::test_mod_def_rootdse_attr[supportedExtension]' PASSED 'suites/basic/basic_test.py::test_mod_def_rootdse_attr[supportedSASLMechanisms]' PASSED 'suites/basic/basic_test.py::test_mod_def_rootdse_attr[vendorName]' PASSED 'suites/basic/basic_test.py::test_mod_def_rootdse_attr[vendorVersion]' PASSED suites/betxns/betxn_test.py::test_betxn_init PASSED suites/betxns/betxn_test.py::test_betxt_7bit PASSED suites/betxns/betxn_test.py::test_betxn_attr_uniqueness PASSED suites/betxns/betxn_test.py::test_betxn_memberof PASSED suites/chaining_plugin/chaining_test.py::test_chaining_init PASSED suites/chaining_plugin/chaining_test.py::test_chaining_ PASSED suites/clu/clu_test.py::test_clu_init PASSED suites/clu/clu_test.py::test_clu_pwdhash PASSED suites/clu/db2ldif_test.py::test_db2ldif_init PASSED suites/collation_plugin/collatation_test.py::test_collatation_init PASSED suites/collation_plugin/collatation_test.py::test_collatation_ PASSED suites/config/config_test.py::test_maxbersize_repl ERROR suites/config/config_test.py::test_config_listen_backport_size ERROR suites/config/config_test.py::test_config_deadlock_policy ERROR suites/cos_plugin/cos_test.py::test_cos_init PASSED suites/cos_plugin/cos_test.py::test_cos_ PASSED 
suites/deref_plugin/deref_test.py::test_deref_init PASSED suites/deref_plugin/
deref_test.py::test_deref_ PASSED suites/disk_monitoring/disk_monitor_test.py::test_disk_monitor_init PASSED suites/disk_monitoring/disk_monitor_test.py::test_disk_monitor_ PASSED suites/distrib_plugin/distrib_test.py::test_distrib_init PASSED suites/distrib_plugin/distrib_test.py::test_distrib_ PASSED suites/dna_plugin/dna_test.py::test_dna_init PASSED suites/dna_plugin/dna_test.py::test_dna_ PASSED suites/ds_logs/ds_logs_test.py::test_ds_logs_init PASSED suites/ds_logs/ds_logs_test.py::test_ds_logs_ PASSED suites/dynamic-plugins/test_dynamic_plugins.py::test_dynamic_plugins PASSED suites/filter/filter_test.py::test_filter_init PASSED suites/filter/filter_test.py::test_filter_escaped PASSED suites/filter/filter_test.py::test_filter_search_original_attrs PASSED suites/filter/rfc3673_all_oper_attrs_test.py::test_supported_features PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[-False-oper_attr_list0]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[-False-oper_attr_list0-*]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[-False-oper_attr_list0-objectClass]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[-True-oper_attr_list1]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[-True-oper_attr_list1-*]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[-True-oper_attr_list1-objectClass]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[ou=people,dc=example,dc=com-False-oper_attr_list2]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[ou=people,dc=example,dc=com-False-oper_attr_list2-*]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[ou=people,dc=example,dc=com-False-oper_attr_list2-objectClass]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[ou=people,dc=example,dc=com-True-oper_attr_list3]' PASSED 
'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[ou=people,dc=examp
le,dc=com-True-oper_attr_list3-*]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[ou=people,dc=example,dc=com-True-oper_attr_list3-objectClass]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[uid=all_attrs_test,ou=people,dc=example,dc=com-False-oper_attr_list4]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[uid=all_attrs_test,ou=people,dc=example,dc=com-False-oper_attr_list4-*]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[uid=all_attrs_test,ou=people,dc=example,dc=com-False-oper_attr_list4-objectClass]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[uid=all_attrs_test,ou=people,dc=example,dc=com-True-oper_attr_list5]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[uid=all_attrs_test,ou=people,dc=example,dc=com-True-oper_attr_list5-*]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[uid=all_attrs_test,ou=people,dc=example,dc=com-True-oper_attr_list5-objectClass]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[cn=config-False-oper_attr_list6]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[cn=config-False-oper_attr_list6-*]' PASSED 'suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[cn=config-False-oper_attr_list6-objectClass]' PASSED suites/get_effective_rights/ger_test.py::test_ger_init PASSED suites/get_effective_rights/ger_test.py::test_ger_ PASSED suites/gssapi_repl/gssapi_repl_test.py::test_gssapi_repl PASSED suites/ldapi/ldapi_test.py::test_ldapi_init PASSED suites/ldapi/ldapi_test.py::test_ldapi_ PASSED suites/linkedattrs_plugin/linked_attrs_test.py::test_linked_attrs_init PASSED suites/linkedattrs_plugin/linked_attrs_test.py::test_linked_attrs_ PASSED suites/mapping_tree/mapping_tree_test.py::test_mapping_tree_init PASSED suites/mapping_tree/mapping_tree_test.py::test_mapping_tree_ PASSED 
suites/memberof_plugin/memberof_test.py::test_memberof_auto_add_oc PASSED suites/m
emory_leaks/range_search_test.py::test_range_search_init FAILED suites/memory_leaks/range_search_test.py::test_range_search PASSED suites/memory_leaks/range_search_test.py::test_range_search ERROR suites/monitor/monitor_test.py::test_monitor_init PASSED suites/monitor/monitor_test.py::test_monitor_ PASSED 'suites/paged_results/paged_results_test.py::test_search_success[6-5]' PASSED 'suites/paged_results/paged_results_test.py::test_search_success[5-5]' PASSED 'suites/paged_results/paged_results_test.py::test_search_success[5-25]' PASSED 'suites/paged_results/paged_results_test.py::test_search_limits_fail[50-200-cn=config,cn=ldbm' 'database,cn=plugins,cn=config-nsslapd-idlistscanlimit-100-UNWILLING_TO_PERFORM]' PASSED 'suites/paged_results/paged_results_test.py::test_search_limits_fail[5-15-cn=config-nsslapd-timelimit-20-UNAVAILABLE_CRITICAL_EXTENSION]' PASSED 'suites/paged_results/paged_results_test.py::test_search_limits_fail[21-50-cn=config-nsslapd-sizelimit-20-SIZELIMIT_EXCEEDED]' PASSED 'suites/paged_results/paged_results_test.py::test_search_limits_fail[21-50-cn=config-nsslapd-pagedsizelimit-5-SIZELIMIT_EXCEEDED]' PASSED 'suites/paged_results/paged_results_test.py::test_search_limits_fail[5-50-cn=config,cn=ldbm' 'database,cn=plugins,cn=config-nsslapd-lookthroughlimit-20-ADMINLIMIT_EXCEEDED]' PASSED suites/paged_results/paged_results_test.py::test_search_sort_success PASSED suites/paged_results/paged_results_test.py::test_search_abandon PASSED suites/paged_results/paged_results_test.py::test_search_with_timelimit PASSED 'suites/paged_results/paged_results_test.py::test_search_dns_ip_aci[dns' = '"localhost.localdomain"]' PASSED 'suites/paged_results/paged_results_test.py::test_search_dns_ip_aci[ip' = '"::1"' or ip = '"127.0.0.1"]' PASSED suites/paged_results/paged_results_test.py::test_search_multiple_paging PASSED 'suites/paged_results/paged_results_test.py::test_search_invalid_cookie[1000]' PASSED 
'suites/paged_results/paged_results_test.py::test_search_invalid_cookie[-1]' PASSED suites/paged_results/paged_re
sults_test.py::test_search_abandon_with_zero_size PASSED suites/paged_results/paged_results_test.py::test_search_pagedsizelimit_success PASSED 'suites/paged_results/paged_results_test.py::test_search_nspagedsizelimit[5-15-PASS]' PASSED 'suites/paged_results/paged_results_test.py::test_search_nspagedsizelimit[15-5-SIZELIMIT_EXCEEDED]' PASSED 'suites/paged_results/paged_results_test.py::test_search_paged_limits[conf_attr_values0-ADMINLIMIT_EXCEEDED]' PASSED 'suites/paged_results/paged_results_test.py::test_search_paged_limits[conf_attr_values1-PASS]' PASSED 'suites/paged_results/paged_results_test.py::test_search_paged_user_limits[conf_attr_values0-ADMINLIMIT_EXCEEDED]' PASSED 'suites/paged_results/paged_results_test.py::test_search_paged_user_limits[conf_attr_values1-PASS]' PASSED suites/paged_results/paged_results_test.py::test_ger_basic PASSED suites/paged_results/paged_results_test.py::test_multi_suffix_search FAILED 'suites/paged_results/paged_results_test.py::test_maxsimplepaged_per_conn_success[None]' PASSED 'suites/paged_results/paged_results_test.py::test_maxsimplepaged_per_conn_success[-1]' PASSED 'suites/paged_results/paged_results_test.py::test_maxsimplepaged_per_conn_success[1000]' PASSED 'suites/paged_results/paged_results_test.py::test_maxsimplepaged_per_conn_failure[0]' PASSED 'suites/paged_results/paged_results_test.py::test_maxsimplepaged_per_conn_failure[1]' PASSED suites/pam_passthru_plugin/pam_test.py::test_pam_init PASSED suites/pam_passthru_plugin/pam_test.py::test_pam_ PASSED suites/passthru_plugin/passthru_test.py::test_passthru_init PASSED suites/passthru_plugin/passthru_test.py::test_passthru_ PASSED suites/password/password_test.py::test_password_init PASSED suites/password/password_test.py::test_password_delete_specific_password PASSED suites/password/pwdAdmin_test.py::test_pwdAdmin_init PASSED suites/password/pwdAdmin_test.py::test_pwdAdmin PASSED suites/password/pwdAdmin_test.py::test_pwdAdmin_config_validation PASSED 
'suites/password/pwdPolicy_attribute_test.py::test_change_pwd[on-of
f-UNWILLING_TO_PERFORM]' PASSED 'suites/password/pwdPolicy_attribute_test.py::test_change_pwd[off-off-UNWILLING_TO_PERFORM]' PASSED 'suites/password/pwdPolicy_attribute_test.py::test_change_pwd[off-on-None]' PASSED 'suites/password/pwdPolicy_attribute_test.py::test_change_pwd[on-on-None]' PASSED suites/password/pwdPolicy_attribute_test.py::test_pwd_min_age PASSED 'suites/password/pwdPolicy_inherit_global_test.py::test_entry_has_no_restrictions[off-off]' PASSED 'suites/password/pwdPolicy_inherit_global_test.py::test_entry_has_no_restrictions[on-off]' PASSED 'suites/password/pwdPolicy_inherit_global_test.py::test_entry_has_no_restrictions[off-on]' PASSED 'suites/password/pwdPolicy_inherit_global_test.py::test_entry_has_restrictions[cn=config]' PASSED 'suites/password/pwdPolicy_inherit_global_test.py::test_entry_has_restrictions[cn="cn=nsPwPolicyEntry,ou=People,dc=example,dc=com",cn=nsPwPolicyContainer,ou=People,dc=example,dc=com]' PASSED suites/password/pwdPolicy_syntax_test.py::test_pwdPolicy_syntax PASSED 'suites/password/pwdPolicy_warning_test.py::test_different_values[' ']' PASSED 'suites/password/pwdPolicy_warning_test.py::test_different_values[junk123]' PASSED 'suites/password/pwdPolicy_warning_test.py::test_different_values[on]' PASSED 'suites/password/pwdPolicy_warning_test.py::test_different_values[off]' PASSED suites/password/pwdPolicy_warning_test.py::test_expiry_time PASSED 'suites/password/pwdPolicy_warning_test.py::test_password_warning[passwordSendExpiringTime-off]' PASSED 'suites/password/pwdPolicy_warning_test.py::test_password_warning[passwordWarning-3600]' PASSED suites/password/pwdPolicy_warning_test.py::test_with_different_password_states PASSED suites/password/pwdPolicy_warning_test.py::test_default_behavior PASSED suites/password/pwdPolicy_warning_test.py::test_with_local_policy PASSED suites/password/pwp_history_test.py::test_pwp_history_test PASSED suites/posix_winsync_plugin/posix_winsync_test.py::test_posix_winsync_init PASSED 
suites/posix_winsync_plugin/posix_winsync_test.py::test_posix_
winsync_ PASSED suites/psearch/psearch_test.py::test_psearch_init PASSED suites/psearch/psearch_test.py::test_psearch_ PASSED suites/referint_plugin/referint_test.py::test_referint_init PASSED suites/referint_plugin/referint_test.py::test_referint_ PASSED suites/replication/cleanallruv_test.py::test_cleanallruv_init PASSED suites/replication/cleanallruv_test.py::test_cleanallruv_clean PASSED suites/replication/cleanallruv_test.py::test_cleanallruv_clean_restart PASSED suites/replication/cleanallruv_test.py::test_cleanallruv_clean_force PASSED suites/replication/cleanallruv_test.py::test_cleanallruv_abort PASSED suites/replication/cleanallruv_test.py::test_cleanallruv_abort_restart PASSED suites/replication/cleanallruv_test.py::test_cleanallruv_abort_certify PASSED suites/replication/cleanallruv_test.py::test_cleanallruv_stress_clean FAILED suites/replication/wait_for_async_feature_test.py::test_not_int_value PASSED suites/replication/wait_for_async_feature_test.py::test_multi_value PASSED 'suites/replication/wait_for_async_feature_test.py::test_value_check[waitfor_async_attr0]' PASSED 'suites/replication/wait_for_async_feature_test.py::test_value_check[waitfor_async_attr1]' PASSED 'suites/replication/wait_for_async_feature_test.py::test_value_check[waitfor_async_attr2]' PASSED 'suites/replication/wait_for_async_feature_test.py::test_value_check[waitfor_async_attr3]' PASSED 'suites/replication/wait_for_async_feature_test.py::test_behavior_with_value[waitfor_async_attr0]' PASSED 'suites/replication/wait_for_async_feature_test.py::test_behavior_with_value[waitfor_async_attr1]' PASSED 'suites/replication/wait_for_async_feature_test.py::test_behavior_with_value[waitfor_async_attr2]' PASSED 'suites/replication/wait_for_async_feature_test.py::test_behavior_with_value[waitfor_async_attr3]' PASSED suites/replsync_plugin/repl_sync_test.py::test_repl_sync_init PASSED suites/replsync_plugin/repl_sync_test.py::test_repl_sync_ PASSED 
suites/resource_limits/res_limits_test.py::test_res_limits_init PASSED suites/resource_limits/
res_limits_test.py::test_res_limits_ PASSED suites/retrocl_plugin/retrocl_test.py::test_retrocl_init PASSED suites/retrocl_plugin/retrocl_test.py::test_retrocl_ PASSED suites/reverpwd_plugin/reverpwd_test.py::test_reverpwd_init PASSED suites/reverpwd_plugin/reverpwd_test.py::test_reverpwd_ PASSED suites/roles_plugin/roles_test.py::test_roles_init PASSED suites/roles_plugin/roles_test.py::test_roles_ PASSED suites/rootdn_plugin/rootdn_plugin_test.py::test_rootdn_init PASSED suites/rootdn_plugin/rootdn_plugin_test.py::test_rootdn_access_specific_time PASSED suites/rootdn_plugin/rootdn_plugin_test.py::test_rootdn_access_day_of_week PASSED suites/rootdn_plugin/rootdn_plugin_test.py::test_rootdn_access_denied_ip PASSED suites/rootdn_plugin/rootdn_plugin_test.py::test_rootdn_access_denied_host PASSED suites/rootdn_plugin/rootdn_plugin_test.py::test_rootdn_access_allowed_ip PASSED suites/rootdn_plugin/rootdn_plugin_test.py::test_rootdn_access_allowed_host PASSED suites/rootdn_plugin/rootdn_plugin_test.py::test_rootdn_config_validate PASSED suites/sasl/sasl_test.py::test_sasl_init PASSED suites/sasl/sasl_test.py::test_sasl_ PASSED suites/schema/test_schema.py::test_schema_comparewithfiles PASSED suites/schema_reload_plugin/schema_reload_test.py::test_schema_reload_init PASSED suites/schema_reload_plugin/schema_reload_test.py::test_schema_reload_ PASSED suites/snmp/snmp_test.py::test_snmp_init PASSED suites/snmp/snmp_test.py::test_snmp_ PASSED suites/ssl/ssl_test.py::test_ssl_init PASSED suites/ssl/ssl_test.py::test_ssl_ PASSED suites/syntax_plugin/syntax_test.py::test_syntax_init PASSED suites/syntax_plugin/syntax_test.py::test_syntax_ PASSED suites/usn_plugin/usn_test.py::test_usn_init PASSED suites/usn_plugin/usn_test.py::test_usn_ PASSED suites/views_plugin/views_test.py::test_views_init PASSED suites/views_plugin/views_test.py::test_views_ PASSED suites/vlv/vlv_test.py::test_vlv_init PASSED suites/vlv/vlv_test.py::test_vlv_ PASSED 
suites/whoami_plugin/whoami_test.py::test_whoami_init PASSED suites/whoami_plugin/whoam
i_test.py::test_whoami_ PASSED ==================================== ERRORS ==================================== ______________________ ERROR at setup of test_ticket4026 _______________________ request = '<SubRequest' ''\''topology'\''' for '<Function' ''\''test_ticket4026'\''>>' '@pytest.fixture(scope="module")' def 'topology(request):' global installation1_prefix if installation1_prefix: 'args_instance[SER_DEPLOYED_DIR]' = installation1_prefix '#' Creating master 1... master1 = 'DirSrv(verbose=False)' if installation1_prefix: 'args_instance[SER_DEPLOYED_DIR]' = installation1_prefix 'args_instance[SER_HOST]' = HOST_MASTER_1 'args_instance[SER_PORT]' = PORT_MASTER_1 'args_instance[SER_SERVERID_PROP]' = SERVERID_MASTER_1 'args_instance[SER_CREATION_SUFFIX]' = DEFAULT_SUFFIX args_master = 'args_instance.copy()' 'master1.allocate(args_master)' instance_master1 = 'master1.exists()' if instance_master1: 'master1.delete()' 'master1.create()' 'master1.open()' 'master1.replica.enableReplication(suffix=SUFFIX,' role=REPLICAROLE_MASTER, 'replicaId=REPLICAID_MASTER_1)' '#' Creating master 2... master2 = 'DirSrv(verbose=False)' if installation1_prefix: 'args_instance[SER_DEPLOYED_DIR]' = installation1_prefix 'args_instance[SER_HOST]' = HOST_MASTER_2 'args_instance[SER_PORT]' = PORT_MASTER_2 'args_instance[SER_SERVERID_PROP]' = SERVERID_MASTER_2 'args_instance[SER_CREATION_SUFFIX]' = DEFAULT_SUFFIX args_master = 'args_instance.copy()' 'master2.allocate(args_master)' instance_master2 = 'master2.exists()' if instance_master2: 'master2.delete()' 'master2.create()' 'master2.open()' 'master2.replica.enableReplication(suffix=SUFFIX,' role=REPLICAROLE_MASTER, 'replicaId=REPLICAID_MASTER_2)' '#' Creating master 3... 
master3 = 'DirSrv(verbose=False)' if installation1_prefix: 'args_instance[SER_DEPLOYED_DIR]' = installation1_prefix 'args_instance[SER_HOST]' = HOST_MASTER_3 'args_instance[SER_PORT]' = PORT_MASTER_3 'args_instance[SER_SERVERID_PROP]' = SERVERID_MASTER_3 'args_instance[SER_CREATION_SUFFIX]' = DEFAULT_SUFFIX args_master = '
args_instance.copy()' 'master3.allocate(args_master)' instance_master3 = 'master3.exists()' if instance_master3: 'master3.delete()' 'master3.create()' 'master3.open()' 'master3.replica.enableReplication(suffix=SUFFIX,' role=REPLICAROLE_MASTER, 'replicaId=REPLICAID_MASTER_3)' '#' '#' Create all the agreements '#' '#' Creating agreement from master 1 to master 2 properties = '{RA_BINDDN:' 'defaultProperties[REPLICATION_BIND_DN],' RA_BINDPW: 'defaultProperties[REPLICATION_BIND_PW],' RA_METHOD: 'defaultProperties[REPLICATION_BIND_METHOD],' RA_TRANSPORT_PROT: 'defaultProperties[REPLICATION_TRANSPORT]}' m1_m2_agmt = 'master1.agreement.create(suffix=SUFFIX,' host=master2.host, port=master2.port, 'properties=properties)' if not m1_m2_agmt: 'log.fatal("Fail' to create a master '->' master replica 'agreement")' 'sys.exit(1)' 'log.debug("%s' 'created"' % 'm1_m2_agmt)' '#' Creating agreement from master 1 to master 3 '#' properties = '{RA_NAME:' 'r'\''meTo_$host:$port'\'',' '#' RA_BINDDN: 'defaultProperties[REPLICATION_BIND_DN],' '#' RA_BINDPW: 'defaultProperties[REPLICATION_BIND_PW],' '#' RA_METHOD: 'defaultProperties[REPLICATION_BIND_METHOD],' '#' RA_TRANSPORT_PROT: 'defaultProperties[REPLICATION_TRANSPORT]}' '#' m1_m3_agmt = 'master1.agreement.create(suffix=SUFFIX,' host=master3.host, port=master3.port, 'properties=properties)' '#' if not m1_m3_agmt: '#' 'log.fatal("Fail' to create a master '->' master replica 'agreement")' '#' 'sys.exit(1)' '#' 'log.debug("%s' 'created"' % 'm1_m3_agmt)' '#' Creating agreement from master 2 to master 1 properties = '{RA_BINDDN:' 'defaultProperties[REPLICATION_BIND_DN],' RA_BINDPW: 'defaultProperties[REPLICATION_BIND_PW],' RA_METHOD: 'defaultProperties[REPLICATION_BIND_METHOD],' RA_TRANSPORT_PROT: 'defaultProperties[REPLICATION_TRANSPORT]}' m2_m1_agmt = 'master2.agreement.create(suffix=SUFFIX,' host=master1.host, port=master1.port, 'properties=properties)' if not m2_m1_agmt: 'log.fatal("Fail' to create a master '->' master replica 
'agreement")' 'sys.exit(1)' 'log.debug("%s' 'created"' % 'm
2_m1_agmt)' '#' Creating agreement from master 2 to master 3 properties = '{RA_BINDDN:' 'defaultProperties[REPLICATION_BIND_DN],' RA_BINDPW: 'defaultProperties[REPLICATION_BIND_PW],' RA_METHOD: 'defaultProperties[REPLICATION_BIND_METHOD],' RA_TRANSPORT_PROT: 'defaultProperties[REPLICATION_TRANSPORT]}' m2_m3_agmt = 'master2.agreement.create(suffix=SUFFIX,' host=master3.host, port=master3.port, 'properties=properties)' if not m2_m3_agmt: 'log.fatal("Fail' to create a master '->' master replica 'agreement")' 'sys.exit(1)' 'log.debug("%s' 'created"' % 'm2_m3_agmt)' '#' Creating agreement from master 3 to master 1 '#' properties = '{RA_NAME:' 'r'\''meTo_$host:$port'\'',' '#' RA_BINDDN: 'defaultProperties[REPLICATION_BIND_DN],' '#' RA_BINDPW: 'defaultProperties[REPLICATION_BIND_PW],' '#' RA_METHOD: 'defaultProperties[REPLICATION_BIND_METHOD],' '#' RA_TRANSPORT_PROT: 'defaultProperties[REPLICATION_TRANSPORT]}' '#' m3_m1_agmt = 'master3.agreement.create(suffix=SUFFIX,' host=master1.host, port=master1.port, 'properties=properties)' '#' if not m3_m1_agmt: '#' 'log.fatal("Fail' to create a master '->' master replica 'agreement")' '#' 'sys.exit(1)' '#' 'log.debug("%s' 'created"' % 'm3_m1_agmt)' '#' Creating agreement from master 3 to master 2 properties = '{RA_BINDDN:' 'defaultProperties[REPLICATION_BIND_DN],' RA_BINDPW: 'defaultProperties[REPLICATION_BIND_PW],' RA_METHOD: 'defaultProperties[REPLICATION_BIND_METHOD],' RA_TRANSPORT_PROT: 'defaultProperties[REPLICATION_TRANSPORT]}' m3_m2_agmt = 'master3.agreement.create(suffix=SUFFIX,' host=master2.host, port=master2.port, 'properties=properties)' if not m3_m2_agmt: 'log.fatal("Fail' to create a master '->' master replica 'agreement")' 'sys.exit(1)' 'log.debug("%s' 'created"' % 'm3_m2_agmt)' '#' Allow the replicas to get situated with the new agreements... 
'time.sleep(5)' '#' '#' Initialize all the agreements '#' 'master1.agreement.init(SUFFIX,' HOST_MASTER_2, 'PORT_MASTER_2)' 'master1.waitForReplInit(m1_m2_agmt)' 'time.sleep(5)' '#' just to be safe 'master2.agreement.init(SUF
FIX,' HOST_MASTER_3, 'PORT_MASTER_3)' 'master2.waitForReplInit(m2_m3_agmt)' '#' Check replication is working... if 'master1.testReplication(DEFAULT_SUFFIX,' 'master2):' 'log.info('\''Replication' is 'working.'\'')' else: 'log.fatal('\''Replication' is not 'working.'\'')' assert False '#' Delete each instance in the end def 'fin():' for master in '(master1,' master2, 'master3):' 'master.delete()' 'request.addfinalizer(fin)' '#' Clear out the tmp dir 'master1.clearTmpDir(__file__)' '>' return 'TopologyReplication(master1,' master2, 'master3)' <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48342_test.py>:189: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48342_test.py>:29: in __init__ 'master3.open()' _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = '<lib389.DirSrv' instance at '0x7f6f4b0be1b8>,' saslmethod = None certdir = None, starttls = False, connOnly = False def 'open(self,' saslmethod=None, certdir=None, starttls=False, 'connOnly=False):' ''\'''\'''\''' It opens a ldap bound connection to dirsrv so that online administrative tasks are possible. 
It binds with the binddn property, then it initializes various fields from DirSrv '(via' '__initPart2)' The state changes '->' DIRSRV_STATE_ONLINE @param self @param saslmethod - None, or GSSAPI @param certdir - Certificate directory for TLS @return None @raise LDAPError ''\'''\'''\''' uri = 'self.toLDAPURL()' if self.verbose: 'self.log.info('\''open():' Connecting to uri '%s'\''' % 'uri)' if 'hasattr(ldap,' ''\''PYLDAP_VERSION'\'')' and MAJOR '>=' 3: 'SimpleLDAPObject.__init__(self,' uri, 'bytes_mode=False)' else: 'SimpleLDAPObject.__init__(self,' 'uri)' if certdir: '"""' We have a certificate directory, so lets start up TLS negotiations '"""' 'self.set_option(ldap.OPT_X_TLS_CACERTFILE,' 'certdir)' if certdir 
or starttls: try: 'self.start_tls_s()' except ldap.LDAPError as e: 'log.fatal('\''TLS' negotiation failed: '%s'\''' % 'str(e))' raise e if saslmethod and 'saslmethod.lower()' == ''\''gssapi'\'':' '"""' Perform kerberos/gssapi authentication '"""' try: sasl_auth = 'ldap.sasl.gssapi("")' 'self.sasl_interactive_bind_s("",' 'sasl_auth)' except ldap.LOCAL_ERROR as e: '#' No Ticket - ultimately invalid credentials 'log.debug("Error:' No Ticket '(%s)"' % 'str(e))' raise ldap.INVALID_CREDENTIALS except ldap.LDAPError as e: 'log.debug("SASL/GSSAPI' Bind Failed: '%s"' % 'str(e))' raise e elif saslmethod: '#' Unknown or unsupported method 'log.debug('\''Unsupported' SASL method: '%s'\''' % 'saslmethod)' raise ldap.UNWILLING_TO_PERFORM elif 'self.can_autobind():' '#' Connect via ldapi, and autobind. '#' do nothing: the bind is complete. if self.verbose: 'log.info("open():' Using root autobind '...")' sasl_auth = 'ldap.sasl.external()' 'self.sasl_interactive_bind_s("",' 'sasl_auth)' else: '"""' Do a simple bind '"""' try: 'self.simple_bind_s(ensure_str(self.binddn),' 'self.bindpw)' except ldap.SERVER_DOWN as e: '#' TODO add server info in exception 'log.debug("Cannot' connect to '%r"' % 'uri)' '>' raise e E SERVER_DOWN: '{'\''desc'\'':' '"Can'\''t' contact LDAP 'server"}' <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:1043: SERVER_DOWN ---------------------------- Captured stdout setup ----------------------------- OK group dirsrv exists OK user dirsrv exists OK group dirsrv exists OK user dirsrv exists OK group dirsrv exists OK user dirsrv exists '('\''Update' succeeded: status ''\'',' ''\''0' Total update 'succeeded'\'')' '('\''Update' succeeded: status ''\'',' ''\''0' Total update 'succeeded'\'')' ---------------------------- Captured stderr setup ----------------------------- INFO:lib389:List backend with suffix=dc=example,dc=com INFO:lib389:Found entry dn: cn=replrepl,cn=config cn: bind dn pseudo user cn: 
replrepl objectClass: top objectClass: person sn: bin
d dn pseudo user userPassword: '{SSHA512}a/p4bBr7GKb8rsOeesoQA2qDPb3BAl392SsmGOjdnKwM6oPEs8EqEd6k4v1mfWO7BmNYp9KSVmPXCgdxihkKteHiOH5DDBab' INFO:lib389:List backend with suffix=dc=example,dc=com INFO:lib389:Found entry dn: cn=replrepl,cn=config cn: bind dn pseudo user cn: replrepl objectClass: top objectClass: person sn: bind dn pseudo user userPassword: '{SSHA512}BlK3tUgS7nT1AweY729luB752VT5hGnrJ6XfTkUU8SFwXhp+B0qGMLsmLOggkIb1x8YJgzJOuTbUso0p1RlWw3VIjRYkz6JG' INFO:lib389:List backend with suffix=dc=example,dc=com INFO:lib389:Found entry dn: cn=replrepl,cn=config cn: bind dn pseudo user cn: replrepl objectClass: top objectClass: person sn: bind dn pseudo user userPassword: '{SSHA512}Z0J01TAznnVbwyS0l0MnrDdFHfklLqvi7omHNEcJrThD5N4uGiMoPuuxHBCZk4Pnja2p0U1xv/stqd+cs0AG3Wj6H9ydggoh' 'DEBUG:tickets.ticket48342_test:cn=meTo_localhost.localdomain:38942,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping' tree,cn=config created 'DEBUG:tickets.ticket48342_test:cn=meTo_localhost.localdomain:38941,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping' tree,cn=config created 'DEBUG:tickets.ticket48342_test:cn=meTo_localhost.localdomain:38943,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping' tree,cn=config created 'DEBUG:tickets.ticket48342_test:cn=meTo_localhost.localdomain:38942,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping' tree,cn=config created INFO:lib389:Starting total init 'cn=meTo_localhost.localdomain:38942,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping' tree,cn=config INFO:lib389:Starting total init 'cn=meTo_localhost.localdomain:38943,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping' tree,cn=config INFO:tickets.ticket48342_test:Replication is working. ____________________ ERROR at setup of test_maxbersize_repl ____________________ request = '<SubRequest' ''\''topology'\''' for '<Function' ''\''test_maxbersize_repl'\''>>' '@pytest.fixture(scope="module")' def 'topology(request):' '"""Create' Replication 'Deployment"""' '#' Creating master 1... 
if DEBUGGING: master1 = 'DirSrv(verbose=True)' else: master1 = 'DirSrv(verbo
se=False)' 'args_instance[SER_HOST]' = HOST_MASTER_1 'args_instance[SER_PORT]' = PORT_MASTER_1 'args_instance[SER_SERVERID_PROP]' = SERVERID_MASTER_1 'args_instance[SER_CREATION_SUFFIX]' = DEFAULT_SUFFIX args_master = 'args_instance.copy()' 'master1.allocate(args_master)' instance_master1 = 'master1.exists()' if instance_master1: 'master1.delete()' 'master1.create()' 'master1.open()' 'master1.replica.enableReplication(suffix=SUFFIX,' role=REPLICAROLE_MASTER, 'replicaId=REPLICAID_MASTER_1)' '#' Creating master 2... if DEBUGGING: master2 = 'DirSrv(verbose=True)' else: master2 = 'DirSrv(verbose=False)' 'args_instance[SER_HOST]' = HOST_MASTER_2 'args_instance[SER_PORT]' = PORT_MASTER_2 'args_instance[SER_SERVERID_PROP]' = SERVERID_MASTER_2 'args_instance[SER_CREATION_SUFFIX]' = DEFAULT_SUFFIX args_master = 'args_instance.copy()' 'master2.allocate(args_master)' instance_master2 = 'master2.exists()' if instance_master2: 'master2.delete()' 'master2.create()' 'master2.open()' 'master2.replica.enableReplication(suffix=SUFFIX,' role=REPLICAROLE_MASTER, 'replicaId=REPLICAID_MASTER_2)' '#' '#' Create all the agreements '#' '#' Creating agreement from master 1 to master 2 properties = '{RA_NAME:' ''\''meTo_'\''' + master2.host + ''\'':'\''' + 'str(master2.port),' RA_BINDDN: 'defaultProperties[REPLICATION_BIND_DN],' RA_BINDPW: 'defaultProperties[REPLICATION_BIND_PW],' RA_METHOD: 'defaultProperties[REPLICATION_BIND_METHOD],' RA_TRANSPORT_PROT: 'defaultProperties[REPLICATION_TRANSPORT]}' m1_m2_agmt = 'master1.agreement.create(suffix=SUFFIX,' host=master2.host, port=master2.port, 'properties=properties)' if not m1_m2_agmt: 'log.fatal("Fail' to create a master '->' master replica 'agreement")' 'sys.exit(1)' 'log.debug("%s' 'created"' % 'm1_m2_agmt)' '#' Creating agreement from master 2 to master 1 properties = '{RA_NAME:' ''\''meTo_'\''' + master1.host + ''\'':'\''' + 'str(master1.port),' RA_BINDDN: 'defaultProperties[REPLICATION_BIND_DN],' RA_BINDPW: 
                      defaultProperties[REPLICATION_BIND_PW],
                      RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                      RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
        m2_m1_agmt = master2.agreement.create(suffix=SUFFIX,
                                              host=master1.host,
                                              port=master1.port,
                                              properties=properties)
        if not m2_m1_agmt:
            log.fatal("Fail to create a master -> master replica agreement")
            sys.exit(1)
        log.debug("%s created" % m2_m1_agmt)

        # Allow the replicas to get situated with the new agreements...
        time.sleep(5)

        #
        # Initialize all the agreements
        #
        master1.agreement.init(SUFFIX, HOST_MASTER_2, PORT_MASTER_2)
>       master1.waitForReplInit(m1_m2_agmt)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/config/config_test.py>:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:2177: in waitForReplInit
    return self.replica.wait_init(agmtdn)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/replica.py>:596: in wait_init
    done, haserror = self.check_init(agmtdn)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/replica.py>:548: in check_init
    agmtdn, ldap.SCOPE_BASE, "(objectclass=*)", attrlist)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:1574: in getEntry
    restype, obj = self.result(res)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:127: in inner
    objtype, data = f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:503: in result
    resp_type, resp_data, resp_msgid = self.result2(msgid,all,timeout)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:507: in result2
    resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all,timeout)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:514: in result3
    resp_ctrl_classes=resp_ctrl_classes
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:521: in result4
    ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f542a21b8>
func = <built-in method result4 of LDAP object at 0x7f6f54487170>
args = (17, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs
        and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
            if self._trace_level>=9:
                traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
OK group dirsrv exists
OK user dirsrv exists
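Every setup error in this run bottoms out in the same `ldap.SERVER_DOWN` ("Can't contact LDAP server") raised under `waitForReplInit()`: the client keeps polling an instance that has gone away. A minimal sketch of the retry pattern such polling code typically needs; `ServerDown` is a stand-in for `ldap.SERVER_DOWN` so the example runs without python-ldap, and the helper name `retry_on_server_down` is hypothetical, not a lib389 API:

```python
import time


class ServerDown(Exception):
    """Stand-in for ldap.SERVER_DOWN so this sketch runs anywhere."""


def retry_on_server_down(func, attempts=5, delay=0.1):
    """Call func(), retrying while it raises ServerDown.

    A fixture could wrap waitForReplInit() like this to ride out a
    short server restart instead of failing the whole module setup.
    The last failure is re-raised so a genuinely dead server still
    surfaces as an error.
    """
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except ServerDown:
            if attempt == attempts:
                raise          # out of retries: propagate the real error
            time.sleep(delay)  # give the instance time to come back
```

Whether retrying is appropriate here depends on why the instance went down; if ns-slapd crashed, retrying only delays the same failure.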
----------------------------- Captured stderr setup -----------------------------
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}6mtvDSnEi0vdJT6SplwM5R7N8lt1f8/6UiCgWORqyUsx6qSp4M0iucrlf9BD9yFLHAfAPaHgE7D2PwIpKQupJBXsCH5PiUPM

INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}JtpycIah637r6u2FfUiCJULPGEDZVSnUA5hyemmdgr1q6x+j/zOwPf4u6EKi+lkrs0PCblp6S8UsYrikNgwkaCekrW0IXloN

INFO:lib389:Starting total init cn=meTo_localhost.localdomain:38942,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
______________ ERROR at setup of test_config_listen_backport_size ______________

request = <SubRequest 'topology' for <Function 'test_maxbersize_repl'>>

    @pytest.fixture(scope="module")
    def topology(request):
        """Create Replication Deployment"""

        # Creating master 1...
        if DEBUGGING:
            master1 = DirSrv(verbose=True)
        else:
            master1 = DirSrv(verbose=False)
        args_instance[SER_HOST] = HOST_MASTER_1
        args_instance[SER_PORT] = PORT_MASTER_1
        args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_1
        args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
        args_master = args_instance.copy()
        master1.allocate(args_master)
        instance_master1 = master1.exists()
        if instance_master1:
            master1.delete()
        master1.create()
        master1.open()
        master1.replica.enableReplication(suffix=SUFFIX,
                                          role=REPLICAROLE_MASTER,
                                          replicaId=REPLICAID_MASTER_1)

        # Creating master 2...
        if DEBUGGING:
            master2 = DirSrv(verbose=True)
        else:
            master2 = DirSrv(verbose=False)
        args_instance[SER_HOST] = HOST_MASTER_2
        args_instance[SER_PORT] = PORT_MASTER_2
        args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_2
        args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
        args_master = args_instance.copy()
        master2.allocate(args_master)
        instance_master2 = master2.exists()
        if instance_master2:
            master2.delete()
        master2.create()
        master2.open()
        master2.replica.enableReplication(suffix=SUFFIX,
                                          role=REPLICAROLE_MASTER,
                                          replicaId=REPLICAID_MASTER_2)

        #
        # Create all the agreements
        #
        # Creating agreement from master 1 to master 2
        properties = {RA_NAME: 'meTo_' + master2.host + ':' + str(master2.port),
                      RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                      RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                      RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                      RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
        m1_m2_agmt = master1.agreement.create(suffix=SUFFIX,
                                              host=master2.host,
                                              port=master2.port,
                                              properties=properties)
        if not m1_m2_agmt:
            log.fatal("Fail to create a master -> master replica agreement")
            sys.exit(1)
        log.debug("%s created" % m1_m2_agmt)

        # Creating agreement from master 2 to master 1
        properties = {RA_NAME: 'meTo_' + master1.host + ':' + str(master1.port),
                      RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                      RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                      RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                      RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
        m2_m1_agmt = master2.agreement.create(suffix=SUFFIX,
                                              host=master1.host,
                                              port=master1.port,
                                              properties=properties)
        if not m2_m1_agmt:
            log.fatal("Fail to create a master -> master replica agreement")
            sys.exit(1)
        log.debug("%s created" % m2_m1_agmt)

        # Allow the replicas to get situated with the new agreements...
        time.sleep(5)

        #
        # Initialize all the agreements
        #
        master1.agreement.init(SUFFIX, HOST_MASTER_2, PORT_MASTER_2)
>       master1.waitForReplInit(m1_m2_agmt)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/config/config_test.py>:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:2177: in waitForReplInit
    return self.replica.wait_init(agmtdn)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/replica.py>:596: in wait_init
    done, haserror = self.check_init(agmtdn)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/replica.py>:548: in check_init
    agmtdn, ldap.SCOPE_BASE, "(objectclass=*)", attrlist)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:1574: in getEntry
    restype, obj = self.result(res)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:127: in inner
    objtype, data = f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:503: in result
    resp_type, resp_data, resp_msgid = self.result2(msgid,all,timeout)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:507: in result2
    resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all,timeout)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:514: in result3
    resp_ctrl_classes=resp_ctrl_classes
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:521: in result4
    ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f542a21b8>
func = <built-in method result4 of LDAP object at 0x7f6f54487170>
args = (17, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs
        and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
            if self._trace_level>=9:
                traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
________________ ERROR at setup of test_config_deadlock_policy _________________

request = <SubRequest 'topology' for <Function 'test_maxbersize_repl'>>

    @pytest.fixture(scope="module")
    def topology(request):
        """Create Replication Deployment"""

        # Creating master 1...
        if DEBUGGING:
            master1 = DirSrv(verbose=True)
        else:
            master1 = DirSrv(verbose=False)
        args_instance[SER_HOST] = HOST_MASTER_1
        args_instance[SER_PORT] = PORT_MASTER_1
        args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_1
        args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
        args_master = args_instance.copy()
        master1.allocate(args_master)
        instance_master1 = master1.exists()
        if instance_master1:
            master1.delete()
        master1.create()
        master1.open()
        master1.replica.enableReplication(suffix=SUFFIX,
                                          role=REPLICAROLE_MASTER,
                                          replicaId=REPLICAID_MASTER_1)

        # Creating master 2...
        if DEBUGGING:
            master2 = DirSrv(verbose=True)
        else:
            master2 = DirSrv(verbose=False)
        args_instance[SER_HOST] = HOST_MASTER_2
        args_instance[SER_PORT] = PORT_MASTER_2
        args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_2
        args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
        args_master = args_instance.copy()
        master2.allocate(args_master)
        instance_master2 = master2.exists()
        if instance_master2:
            master2.delete()
        master2.create()
        master2.open()
        master2.replica.enableReplication(suffix=SUFFIX,
                                          role=REPLICAROLE_MASTER,
                                          replicaId=REPLICAID_MASTER_2)

        #
        # Create all the agreements
        #
        # Creating agreement from master 1 to master 2
        properties = {RA_NAME: 'meTo_' + master2.host + ':' + str(master2.port),
                      RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                      RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                      RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                      RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
        m1_m2_agmt = master1.agreement.create(suffix=SUFFIX,
                                              host=master2.host,
                                              port=master2.port,
                                              properties=properties)
        if not m1_m2_agmt:
            log.fatal("Fail to create a master -> master replica agreement")
            sys.exit(1)
        log.debug("%s created" % m1_m2_agmt)

        # Creating agreement from master 2 to master 1
        properties = {RA_NAME: 'meTo_' + master1.host + ':' + str(master1.port),
                      RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                      RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                      RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                      RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
        m2_m1_agmt = master2.agreement.create(suffix=SUFFIX,
                                              host=master1.host,
                                              port=master1.port,
                                              properties=properties)
        if not m2_m1_agmt:
            log.fatal("Fail to create a master -> master replica agreement")
            sys.exit(1)
        log.debug("%s created" % m2_m1_agmt)

        # Allow the replicas to get situated with the new agreements...
        time.sleep(5)

        #
        # Initialize all the agreements
        #
        master1.agreement.init(SUFFIX, HOST_MASTER_2, PORT_MASTER_2)
>       master1.waitForReplInit(m1_m2_agmt)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/config/config_test.py>:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:2177: in waitForReplInit
    return self.replica.wait_init(agmtdn)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/replica.py>:596: in wait_init
    done, haserror = self.check_init(agmtdn)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/replica.py>:548: in check_init
    agmtdn, ldap.SCOPE_BASE, "(objectclass=*)", attrlist)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:1574: in getEntry
    restype, obj = self.result(res)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:127: in inner
    objtype, data = f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:503: in result
    resp_type, resp_data, resp_msgid = self.result2(msgid,all,timeout)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:507: in result2
    resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all,timeout)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:514: in result3
    resp_ctrl_classes=resp_ctrl_classes
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:521: in result4
    ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f542a21b8>
func = <built-in method result4 of LDAP object at 0x7f6f54487170>
args = (17, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs
        and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
            if self._trace_level>=9:
                traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
____________________ ERROR at teardown of test_range_search ____________________

    def fin():
        standalone.delete()
        if not standalone.has_asan():
            sbin_dir = standalone.get_sbin_dir()
>           valgrind_disable(sbin_dir)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/memory_leaks/range_search_test.py>:61:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

sbin_dir = '/usr/sbin'

    def valgrind_disable(sbin_dir):
        '''
        Restore the ns-slapd binary to its original state - the server
        instances are expected to be stopped.

        Note - selinux is enabled at the end of this process.
:param sbin_dir - the location of the ns-slapd binary '(e.g.' '/usr/sbin)' :raise ValueError :raise En
vironmentError: If script is not run as ''\''root'\''' ''\'''\'''\''' if 'os.geteuid()' '!=' 0: 'log.error('\''This' script must be run as root to use 'valgrind'\'')' raise EnvironmentError nsslapd_orig = ''\''%s/ns-slapd'\''' % sbin_dir nsslapd_backup = ''\''%s/ns-slapd.original'\''' % sbin_dir '#' Restore the original ns-slapd try: 'shutil.copyfile(nsslapd_backup,' 'nsslapd_orig)' except IOError as e: 'log.fatal('\''valgrind_disable:' failed to restore ns-slapd, error: '%s'\''' % 'e.strerror)' '>' raise 'ValueError('\''failed' to restore ns-slapd, error: '%s'\''' % 'e.strerror)' E ValueError: failed to restore ns-slapd, error: Text file busy <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/utils.py>:288: ValueError ----------------------------- Captured stderr call ----------------------------- INFO:suites.memory_leaks.range_search_test:Running test_range_search... CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user1,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user2,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user3,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user4,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user5,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user6,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user7,dc=example,dc=com: error 
'Can'\''t' contact LDAP server CRITICAL:suites.memory_leak
s.range_search_test:test_range_search: Failed to add test user uid=user8,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user9,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user10,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user11,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user12,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user13,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user14,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user15,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user16,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user17,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user18,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user19,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user20,dc=example,dc=com: error 'Can'\''t' contact 
LDAP server CRITICAL:suites.memory_leaks.range_s
earch_test:test_range_search: Failed to add test user uid=user21,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user22,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user23,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user24,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user25,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user26,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user27,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user28,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user29,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user30,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user31,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user32,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user33,dc=example,dc=com: error 'Can'\''t' contact LDAP 
server CRITICAL:suites.memory_leaks.range_search_t
est:test_range_search: Failed to add test user uid=user34,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user35,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user36,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user37,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user38,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user39,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user40,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user41,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user42,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user43,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user44,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user45,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user46,dc=example,dc=com: error 'Can'\''t' contact LDAP server 
CRITICAL:suites.memory_leaks.range_search_test:tes
t_range_search: Failed to add test user uid=user47,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user48,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user49,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user50,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user51,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user52,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user53,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user54,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user55,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user56,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user57,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user58,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user59,dc=example,dc=com: error 'Can'\''t' contact LDAP server 
CRITICAL:suites.memory_leaks.range_search_test:test_range
_search: Failed to add test user uid=user60,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user61,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user62,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user63,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user64,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user65,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user66,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user67,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user68,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user69,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user70,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user71,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user72,dc=example,dc=com: error 'Can'\''t' contact LDAP server 
CRITICAL:suites.memory_leaks.range_search_test:test_range_search
: Failed to add test user uid=user73,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user74,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user75,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user76,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user77,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user78,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user79,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user80,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user81,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user82,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user83,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user84,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user85,dc=example,dc=com: error 'Can'\''t' contact LDAP server 
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Faile
d to add test user uid=user86,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user87,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user88,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user89,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user90,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user91,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user92,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user93,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user94,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user95,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user96,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user97,dc=example,dc=com: error 'Can'\''t' contact LDAP server CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user98,dc=example,dc=com: error 'Can'\''t' contact LDAP server 
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user99,dc=example,dc=com: error Can't contact LDAP server
INFO:suites.memory_leaks.range_search_test:test_range_search: PASSED
--------------------------- Captured stdout teardown ---------------------------
Instance slapd-standalone removed.
--------------------------- Captured stderr teardown ---------------------------
CRITICAL:lib389.utils:valgrind_disable: failed to restore ns-slapd, error: Text file busy
=================================== FAILURES ===================================
______________________________ test_ticket1347760 ______________________________

topology = <tickets.ticket1347760_test.TopologyStandalone object at 0x7f6f5421c210>

    def test_ticket1347760(topology):
        """
        Prevent revealing the entry info to whom has no access rights.
        """
        log.info('Testing Bug 1347760 - Information disclosure via repeated use of LDAP ADD operation, etc.')
        log.info('Disabling accesslog logbuffering')
        topology.standalone.modify_s(CONFIG_DN, [(ldap.MOD_REPLACE, 'nsslapd-accesslog-logbuffering', 'off')])
        log.info('Bind as {%s,%s}' % (DN_DM, PASSWORD))
        topology.standalone.simple_bind_s(DN_DM, PASSWORD)
        log.info('Adding ou=%s a bind user belongs to.' % BOU)
        topology.standalone.add_s(Entry((BINDOU, {
            'objectclass': 'top organizationalunit'.split(),
            'ou': BOU})))
        log.info('Adding a bind user.')
        topology.standalone.add_s(Entry((BINDDN,
                                         {'objectclass': "top person organizationalPerson inetOrgPerson".split(),
                                          'cn': 'bind user',
                                          'sn': 'user',
                                          'userPassword': BINDPW})))
        log.info('Adding a test user.')
        topology.standalone.add_s(Entry((TESTDN,
                                         {'objectclass': "top person organizationalPerson inetOrgPerson".split(),
                                          'cn': 'test user',
                                          'sn': 'user',
                                          'userPassword': TESTPW})))
        log.info('Deleting aci in %s.' % DEFAULT_SUFFIX)
        topology.standalone.modify_s(DEFAULT_SUFFIX, [(ldap.MOD_DELETE, 'aci', None)])
        log.info('Bind case 1. the bind user has no rights to read the entry itself, bind should be successful.')
        log.info('Bind as {%s,%s} who has no access rights.' % (BINDDN, BINDPW))
        try:
            topology.standalone.simple_bind_s(BINDDN, BINDPW)
        except ldap.LDAPError as e:
            log.info('Desc ' + e.message['desc'])
            assert False
        file_path = os.path.join(topology.standalone.prefix, 'var/log/dirsrv/slapd-%s/access' % topology.standalone.serverid)
>       file_obj = open(file_path, "r")
E       IOError: [Errno 2] No such file or directory: '/usr/var/log/dirsrv/slapd-standalone/access'

tickets/ticket1347760_test.py:236: IOError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket1347760_test:Testing Bug 1347760 - Information disclosure via repeated use of LDAP ADD operation, etc.
INFO:tickets.ticket1347760_test:Disabling accesslog logbuffering
INFO:tickets.ticket1347760_test:Bind as {cn=Directory Manager,password}
INFO:tickets.ticket1347760_test:Adding ou=BOU a bind user belongs to.
INFO:tickets.ticket1347760_test:Adding a bind user.
INFO:tickets.ticket1347760_test:Adding a test user.
INFO:tickets.ticket1347760_test:Deleting aci in dc=example,dc=com.
INFO:tickets.ticket1347760_test:Bind case 1. the bind user has no rights to read the entry itself, bind should be successful.
INFO:tickets.ticket1347760_test:Bind as {uid=buser123,ou=BOU,dc=example,dc=com,buser123} who has no access rights.
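The IOError above comes from joining `topology.standalone.prefix` (here `'/usr'`) with the relative path `'var/log/dirsrv/...'`, producing `/usr/var/log/...`, which does not exist on an RPM-layout install where logs live under `/var`. A minimal sketch of a safer resolver; `access_log_path` is a hypothetical helper for illustration, not lib389's actual API:

```python
import os

def access_log_path(prefix, serverid):
    """Resolve the access log path for an instance.

    Sketch only: on an RPM install the effective prefix is '/' (or the
    build prefix '/usr'), and in both cases logs live under /var, so the
    prefix must not be blindly prepended to 'var/log/...'.
    """
    prefix = prefix or ''
    if prefix in ('', '/', '/usr'):
        # RPM layout: logs are under /var regardless of the binary prefix
        return '/var/log/dirsrv/slapd-%s/access' % serverid
    # Genuine relocated install: prefix really does contain var/log
    return os.path.join(prefix, 'var/log/dirsrv/slapd-%s/access' % serverid)

print(access_log_path('/usr', 'standalone'))
# -> /var/log/dirsrv/slapd-standalone/access (not /usr/var/log/...)
```

With this shape, a relocated prefix such as `/opt/ds` still resolves to `/opt/ds/var/log/...`, while the failing `/usr` case maps back to the path the server actually writes to.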
______________________________ test_ticket47431_1 ______________________________

topology = <tickets.ticket47431_test.TopologyStandalone object at 0x7f6f5397f9d0>

    def test_ticket47431_1(topology):
        '''
        nsslapd-pluginarg0: uid
        nsslapd-pluginarg1: mail
        nsslapd-pluginarg2: userpassword <== repeat 27 times
        nsslapd-pluginarg3: ,
        nsslapd-pluginarg4: dc=example,dc=com

        The duplicated values are removed by str2entry_dupcheck as follows:
        [..] - str2entry_dupcheck: 27 duplicate values for attribute type nsslapd-pluginarg2 detected in entry cn=7-bit check,cn=plugins,cn=config. Extra values ignored.
        '''
        log.info("Ticket 47431 - 1: Check 26 duplicate values are treated as one...")
        expected = "str2entry_dupcheck - . .. .cache duplicate values for attribute type nsslapd-pluginarg2 detected in entry cn=7-bit check,cn=plugins,cn=config."
        log.debug('modify_s %s' % DN_7BITPLUGIN)
        try:
            topology.standalone.modify_s(DN_7BITPLUGIN,
                                         [(ldap.MOD_REPLACE, 'nsslapd-pluginarg0', "uid"),
                                          (ldap.MOD_REPLACE, 'nsslapd-pluginarg1', "mail"),
                                          (ldap.MOD_REPLACE, 'nsslapd-pluginarg2', "userpassword"),
                                          (ldap.MOD_REPLACE, 'nsslapd-pluginarg3', ","),
                                          (ldap.MOD_REPLACE, 'nsslapd-pluginarg4', SUFFIX)])
        except ValueError:
            log.error('modify failed: Some problem occured with a value that was provided')
            assert False
        arg2 = "nsslapd-pluginarg2: userpassword"
        topology.standalone.stop(timeout=10)
        dse_ldif = topology.standalone.confdir + '/dse.ldif'
        os.system('mv %s %s.47431' % (dse_ldif, dse_ldif))
        os.system('sed -e "s/\\(%s\\)/\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1/" %s.47431 > %s' % (arg2, dse_ldif, dse_ldif))
        topology.standalone.start(timeout=10)
        cmdline = 'egrep -i "%s" %s' % (expected, topology.standalone.errlog)
        p = os.popen(cmdline, "r")
        line = p.readline()
        if line == "":
            log.error('Expected error "%s" not logged in %s' % (expected, topology.standalone.errlog))
>           assert False
E           assert False

tickets/ticket47431_test.py:110: AssertionError
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket47431_test:Ticket 47431 - 1: Check 26 duplicate values are treated as one...
DEBUG:tickets.ticket47431_test:modify_s cn=7-bit check,cn=plugins,cn=config
grep: /var/log/dirsrv/slapd-standalone/error: No such file or directory
ERROR:tickets.ticket47431_test:Expected error "str2entry_dupcheck - . .. .cache duplicate values for attribute type nsslapd-pluginarg2 detected in entry cn=7-bit check,cn=plugins,cn=config." not logged in /var/log/dirsrv/slapd-standalone/error
_______________________________ test_ticket47462 _______________________________

topology = <tickets.ticket47462_test.TopologyMaster1Master2 object at 0x7f6f54036d90>

    def test_ticket47462(topology):
        """
        Test that AES properly replaces DES during an update/restart, and that
        replication also works correctly.
        """
        #
        # First set config as if it's an older version.  Set DES to use
        # libdes-plugin, MMR to depend on DES, delete the existing AES plugin,
        # and set a DES password for the replication agreement.
        #
        #
        # Add an extra attribute to the DES plugin args
        #
        try:
            topology.master1.modify_s(DES_PLUGIN, [(ldap.MOD_REPLACE, 'nsslapd-pluginEnabled', 'on')])
        except ldap.LDAPError as e:
            log.fatal('Failed to enable DES plugin, error: ' + e.message['desc'])
            assert False
        try:
            topology.master1.modify_s(DES_PLUGIN, [(ldap.MOD_ADD, 'nsslapd-pluginarg2', 'description')])
        except ldap.LDAPError as e:
            log.fatal('Failed to reset DES plugin, error: ' + e.message['desc'])
            assert False
        try:
            topology.master1.modify_s(MMR_PLUGIN, [(ldap.MOD_DELETE, 'nsslapd-plugin-depends-on-named', 'AES')])
        except ldap.NO_SUCH_ATTRIBUTE:
            pass
        except ldap.LDAPError as e:
            log.fatal('Failed to reset MMR plugin, error: ' + e.message['desc'])
            assert False
        #
        # Delete the AES plugin
        #
        try:
            topology.master1.delete_s(AES_PLUGIN)
        except ldap.NO_SUCH_OBJECT:
            pass
        except ldap.LDAPError as e:
            log.fatal('Failed to delete AES plugin, error: ' + e.message['desc'])
            assert False
        # restart the server so we must use DES plugin
        topology.master1.restart(timeout=10)
        #
        # Get the agmt dn, and set the password
        #
        try:
            entry = topology.master1.search_s('cn=config', ldap.SCOPE_SUBTREE, 'objectclass=nsDS5ReplicationAgreement')
            if entry:
                agmt_dn = entry[0].dn
                log.info('Found agmt dn (%s)' % agmt_dn)
            else:
                log.fatal('No replication agreements!')
                assert False
        except ldap.LDAPError as e:
            log.fatal('Failed to search for replica credentials: ' + e.message['desc'])
            assert False
        try:
            properties = {RA_BINDPW: "password"}
            topology.master1.agreement.setProperties(None, agmt_dn, None, properties)
            log.info('Successfully modified replication agreement')
        except ValueError:
            log.error('Failed to update replica agreement: ' + AGMT_DN)
            assert False
        #
        # Check replication works with the new DES password
        #
        try:
            topology.master1.add_s(Entry((USER1_DN,
                                          {'objectclass': "top person".split(),
                                           'sn': 'sn',
                                           'description': 'DES value to convert',
                                           'cn': 'test_user'})))
            loop = 0
            ent = None
            while loop <= 10:
                try:
                    ent = topology.master2.getEntry(USER1_DN, ldap.SCOPE_BASE, "(objectclass=*)")
                    break
                except ldap.NO_SUCH_OBJECT:
                    time.sleep(1)
                    loop += 1
            if not ent:
                log.fatal('Replication test failed fo user1!')
                assert False
            else:
                log.info('Replication test passed')
        except ldap.LDAPError as e:
            log.fatal('Failed to add test user: ' + e.message['desc'])
            assert False
        #
        # Add a backend (that has no entries)
        #
        try:
            topology.master1.backend.create("o=empty", {BACKEND_NAME: "empty"})
        except ldap.LDAPError as e:
            log.fatal('Failed to create extra/empty backend: ' + e.message['desc'])
            assert False
        #
        # Run the upgrade...
        #
>       topology.master1.upgrade('online')

tickets/ticket47462_test.py:269:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../lib389/lib389/__init__.py:2500: in upgrade
    DirSrvTools.runUpgrade(self.prefix, online)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

prefix = '/usr', online = True

    @staticmethod
    def runUpgrade(prefix, online=True):
        '''
        Run "setup-ds.pl --update"  We simply pass in one DirSrv isntance, and
        this will update all the instances that are in this prefix.  For the
        update to work we must fix/adjust the permissions of the scripts in:

            /prefix/lib[64]/dirsrv/slapd-INSTANCE/
        '''
        if not prefix:
            prefix = ''
            # This is an RPM run - check if /lib exists, if not use /lib64
            if os.path.isdir('/usr/lib/dirsrv'):
                libdir = '/usr/lib/dirsrv/'
            else:
                if os.path.isdir('/usr/lib64/dirsrv'):
                    libdir = '/usr/lib64/dirsrv/'
                else:
                    log.fatal('runUpgrade: failed to find slapd lib dir!')
                    assert False
        else:
            # Standard prefix lib location
            if os.path.isdir('/usr/lib64/dirsrv'):
                libdir = '/usr/lib64/dirsrv/'
            else:
                libdir = '/lib/dirsrv/'
        # Gather all the instances so we can adjust the permissions, otherwise
        servers = []
        path = prefix + '/etc/dirsrv'
>       for files in os.listdir(path):
E       OSError: [Errno 2] No such file or directory: '/usr/etc/dirsrv'

../../../lib389/lib389/tools.py:932: OSError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
OK group dirsrv exists
OK user dirsrv exists
('Update succeeded: status ', '0 Total update succeeded')
---------------------------- Captured stderr setup -----------------------------
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}8CwmSw5cC9cNSfTE4dILAhrRU2CVrnAUPnumNxhRizwGHMk83wdZJG9W6TjgWxV0E+taSeLIRbssrAoWhPGAImebdYNn6Aai
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}AIs0j16jV540vzUgF1dHce45PjVrFPVT1FYhnNFBCJeQa59urY7h3wgzDCzRumtpgo4v20EP9vDJPZzKFNRr87tgQe6mUZsK
DEBUG:tickets.ticket47462_test:cn=meTo_$host:$port,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
INFO:lib389:Starting total init cn=meTo_$host:$port,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
INFO:tickets.ticket47462_test:Replication is working.
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket47462_test:Found agmt dn (cn=meTo_$host:$port,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config)
INFO:tickets.ticket47462_test:Successfully modified replication agreement
INFO:tickets.ticket47462_test:Replication test passed
INFO:lib389:List backend with suffix=o=empty
INFO:lib389:Creating a local backend
INFO:lib389:List backend cn=empty,cn=ldbm database,cn=plugins,cn=config
INFO:lib389:Found entry dn: cn=empty,cn=ldbm database,cn=plugins,cn=config
cn: empty
nsslapd-cachememsize: 10485760
nsslapd-cachesize: -1
nsslapd-directory: /var/lib/dirsrv/slapd-master_1/db/empty
nsslapd-dncachememsize: 10485760
nsslapd-readonly: off
nsslapd-require-index: off
nsslapd-suffix: o=empty
objectClass: top
objectClass: extensibleObject
objectClass: nsBackendInstance
_______________________________ test_ticket47536 _______________________________

topology = <tickets.ticket47536_test.TopologyReplication object at 0x7f6f5320fcd0>

    def test_ticket47536(topology):
        """
        Set up 2way MMR:
            master_1 ----- startTLS -----> master_2
            master_1 <-- TLS_clientAuth -- master_2
        Check CA cert, Server-Cert and Key are retrieved as PEM from cert db
        when the server is started.  First, the file names are not specified
        and the default names derived from the cert nicknames.  Next, the file
        names are specified in the encryption config entries.
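The `runUpgrade` OSError above is the same prefix-handling bug seen in test_ticket1347760: with `prefix = '/usr'` the code builds `prefix + '/etc/dirsrv'` and gets the nonexistent `/usr/etc/dirsrv`, because an RPM build keeps its configuration under `/etc/dirsrv` regardless of the binary prefix. A minimal sketch of the normalization; `instance_config_root` and `list_instances` are hypothetical names, not lib389's real API:

```python
import os

def instance_config_root(prefix):
    """Directory holding the slapd-* instance config dirs.

    Sketch under the assumption that '/usr' (an RPM install) should be
    treated like an empty prefix, since RPM builds configure into /etc.
    """
    if not prefix or prefix == '/usr':
        return '/etc/dirsrv'
    return os.path.join(prefix, 'etc/dirsrv')

def list_instances(prefix):
    """Return slapd-* instance dir names, failing soft if the root is absent."""
    path = instance_config_root(prefix)
    if not os.path.isdir(path):
        return []  # instead of letting os.listdir() raise OSError mid-upgrade
    return [d for d in os.listdir(path) if d.startswith('slapd-')]

print(instance_config_root('/usr'))  # -> /etc/dirsrv
```

The fail-soft `list_instances` also keeps a missing config root from aborting the whole upgrade pass the way the traceback shows.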
        Each time add 5 entries to master 1 and 2 and check they are replicated.
        """
        log.info("Ticket 47536 - Allow usage of OpenLDAP libraries that don't use NSS for crypto")
        create_keys_certs(topology)
        config_tls_agreements(topology)
        add_entry(topology.master1, 'master1', 'uid=m1user', 0, 5)
        add_entry(topology.master2, 'master2', 'uid=m2user', 0, 5)
        time.sleep(1)
        log.info('##### Searching for entries on master1...')
        entries = topology.master1.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, '(uid=*)')
        assert 10 == len(entries)
        log.info('##### Searching for entries on master2...')
        entries = topology.master2.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, '(uid=*)')
>       assert 10 == len(entries)
E       assert 10 == 5
E        +  where 5 = len([dn: uid=m2user0,dc=example,dc=com\ncn: master2 user0\nobjectClass: top\nobjectClass: person\nobjectClass: extensibleObjec...er2 user4\nobjectClass: top\nobjectClass: person\nobjectClass: extensibleObject\nsn: user4\nuid: uid=m2user4\nuid: m2user4\n\n])

tickets/ticket47536_test.py:494: AssertionError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
OK group dirsrv exists
OK user dirsrv exists
('Update succeeded: status ', '0 Total update succeeded')
---------------------------- Captured stderr setup -----------------------------
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}K5tzYTZnFrMEK/x52hThR4iWTzvVSiDHxIQvFhEmwIhq4YciL9UKq6yJb0Or15Vb1yuwdNP5uGlfiK56adL1wuNxnFX3w8lU
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}yV0SJdxZdeu4gT3YKPyKCMtGrBD7EbizR0JsgsRg6XYKVyQVVykD6aAkBre3sS0j20zFutsc7o7VGYBhD+m3OrKNNh7IOKCj
DEBUG:tickets.ticket47536_test:cn=meTo_localhost.localdomain:38942,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:tickets.ticket47536_test:cn=meTo_localhost.localdomain:38941,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
INFO:lib389:Starting total init cn=meTo_localhost.localdomain:38942,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
INFO:tickets.ticket47536_test:Replication is working.
----------------------------- Captured stdout call -----------------------------
Is this a CA certificate [y/N]?
Enter the path length constraint, enter to skip [<0 for unlimited path]: >
Is this a critical extension [y/N]?
pk12util: PKCS12 EXPORT SUCCESSFUL
pk12util: PKCS12 IMPORT SUCCESSFUL
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket47536_test:Ticket 47536 - Allow usage of OpenLDAP libraries that don't use NSS for crypto
INFO:tickets.ticket47536_test: ######################### Creating SSL Keys and Certs ######################
INFO:tickets.ticket47536_test:##### shutdown master1
INFO:tickets.ticket47536_test:##### Creating a password file
INFO:tickets.ticket47536_test:##### create the pin file
INFO:tickets.ticket47536_test:##### Creating a noise file
INFO:tickets.ticket47536_test:##### Create key3.db and cert8.db database (master1): ['certutil', '-N', '-d', '/etc/dirsrv/slapd-master_1', '-f', '/etc/dirsrv/slapd-master_1/pwdfile.txt']
INFO:tickets.ticket47536_test:  OUT:
INFO:tickets.ticket47536_test:  ERR:
INFO:tickets.ticket47536_test:##### Creating encryption key for CA (master1): ['certutil', '-G', '-d', '/etc/dirsrv/slapd-master_1', '-z', '/etc/dirsrv/slapd-master_1/noise.txt', '-f', '/etc/dirsrv/slapd-master_1/pwdfile.txt']
INFO:tickets.ticket47536_test:  OUT:
INFO:tickets.ticket47536_test:  ERR:
INFO:tickets.ticket47536_test:##### Creating self-signed CA certificate (master1) -- nickname CAcertificate
Generating key.  This may take a few moments...
INFO:tickets.ticket47536_test:##### Creating Server certificate -- nickname Server-Cert1: ['certutil', '-S', '-n', 'Server-Cert1', '-s', 'CN=localhost.localdomain,OU=389 Directory Server', '-c', 'CAcertificate', '-t', ',,', '-m', '1001', '-v', '120', '-d', '/etc/dirsrv/slapd-master_1', '-z', '/etc/dirsrv/slapd-master_1/noise.txt', '-f', '/etc/dirsrv/slapd-master_1/pwdfile.txt']
INFO:tickets.ticket47536_test:  OUT:
INFO:tickets.ticket47536_test:  ERR:
INFO:tickets.ticket47536_test:##### Creating Server certificate -- nickname Server-Cert2: ['certutil', '-S', '-n', 'Server-Cert2', '-s', 'CN=localhost.localdomain,OU=390 Directory Server', '-c', 'CAcertificate', '-t', ',,', '-m', '1002', '-v', '120', '-d', '/etc/dirsrv/slapd-master_1', '-z', '/etc/dirsrv/slapd-master_1/noise.txt', '-f', '/etc/dirsrv/slapd-master_1/pwdfile.txt']
INFO:tickets.ticket47536_test:  OUT:
INFO:tickets.ticket47536_test:  ERR:
INFO:tickets.ticket47536_test:##### start master1
INFO:tickets.ticket47536_test:##### enable SSL in master1 with all ciphers
INFO:tickets.ticket47536_test: ######################### Enabling SSL LDAPSPORT 41636 ######################
INFO:tickets.ticket47536_test:##### Check the cert db: ['certutil', '-L', '-d', '/etc/dirsrv/slapd-master_1']
INFO:tickets.ticket47536_test:  OUT:
INFO:tickets.ticket47536_test:
INFO:tickets.ticket47536_test: Certificate Nickname        Trust Attributes
INFO:tickets.ticket47536_test:                             SSL,S/MIME,JAR/XPI
INFO:tickets.ticket47536_test:
INFO:tickets.ticket47536_test: CAcertificate               CTu,u,u
INFO:tickets.ticket47536_test: Server-Cert2                u,u,u
INFO:tickets.ticket47536_test: Server-Cert1                u,u,u
INFO:tickets.ticket47536_test:  ERR:
INFO:tickets.ticket47536_test:##### restart master1
INFO:tickets.ticket47536_test:##### Check PEM files of master1 (before setting nsslapd-extract-pemfiles
INFO:tickets.ticket47536_test: ######################### Check PEM files (CAcertificate, Server-Cert1, Server-Cert1-Key) not in /etc/dirsrv/slapd-master_1 ######################
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_1/CAcertificate.pem is correctly not generated.
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_1/Server-Cert1.pem is correctly not generated.
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_1/Server-Cert1-Key.pem is correctly not generated.
INFO:tickets.ticket47536_test:##### Set on to nsslapd-extract-pemfiles
INFO:tickets.ticket47536_test:##### restart master1
INFO:tickets.ticket47536_test:##### Check PEM files of master1 (after setting nsslapd-extract-pemfiles
INFO:tickets.ticket47536_test: ######################### Check PEM files (CAcertificate, Server-Cert1, Server-Cert1-Key) in /etc/dirsrv/slapd-master_1 ######################
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_1/CAcertificate.pem is successfully generated.
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_1/Server-Cert1.pem is successfully generated.
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_1/Server-Cert1-Key.pem is successfully generated.
INFO:tickets.ticket47536_test:##### Extract PK12 file for master2: pk12util -o /tmp/Server-Cert2.pk12 -n "Server-Cert2" -d /etc/dirsrv/slapd-master_1 -w /etc/dirsrv/slapd-master_1/pwdfile.txt -k /etc/dirsrv/slapd-master_1/pwdfile.txt
INFO:tickets.ticket47536_test:##### Check PK12 files
INFO:tickets.ticket47536_test:/tmp/Server-Cert2.pk12 is successfully extracted.
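The `assert 10 == 5` failure above suggests the fixed `time.sleep(1)` before counting entries on master2 was simply not long enough for replication to converge in the m1-to-m2 direction. A poll-until-count helper is the usual fix; this is a self-contained sketch (in the real test, `search_fn` would be a lambda wrapping `master2.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, '(uid=*)')`):

```python
import time

def wait_for_count(search_fn, expected, timeout=30.0, interval=0.5):
    """Poll search_fn() until it yields `expected` entries or timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        entries = search_fn()
        if len(entries) >= expected:
            return entries
        if time.monotonic() >= deadline:
            raise AssertionError('only %d of %d entries replicated'
                                 % (len(entries), expected))
        time.sleep(interval)

# Simulated convergence: each poll "replicates" one more entry.
state = []
def fake_search():
    state.append(object())
    return state

print(len(wait_for_count(fake_search, 3, timeout=5, interval=0)))  # -> 3
```

Bounding the wait with a timeout keeps a genuinely broken agreement from hanging the suite, while a healthy but slow one no longer trips the assertion.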
INFO:tickets.ticket47536_test:##### stop master2
INFO:tickets.ticket47536_test:##### Initialize Cert DB for master2
INFO:tickets.ticket47536_test:##### Create key3.db and cert8.db database (master2): ['certutil', '-N', '-d', '/etc/dirsrv/slapd-master_2', '-f', '/etc/dirsrv/slapd-master_1/pwdfile.txt']
INFO:tickets.ticket47536_test:  OUT:
INFO:tickets.ticket47536_test:  ERR:
INFO:tickets.ticket47536_test:##### Import certs to master2
INFO:tickets.ticket47536_test:Importing CAcertificate
INFO:tickets.ticket47536_test:##### Importing Server-Cert2 to master2: pk12util -i /tmp/Server-Cert2.pk12 -n "Server-Cert2" -d /etc/dirsrv/slapd-master_2 -w /etc/dirsrv/slapd-master_1/pwdfile.txt -k /etc/dirsrv/slapd-master_1/pwdfile.txt
INFO:tickets.ticket47536_test:copy /etc/dirsrv/slapd-master_1/pin.txt to /etc/dirsrv/slapd-master_2/pin.txt
INFO:tickets.ticket47536_test:##### start master2
INFO:tickets.ticket47536_test:##### enable SSL in master2 with all ciphers
INFO:tickets.ticket47536_test: ######################### Enabling SSL LDAPSPORT 42636 ######################
INFO:tickets.ticket47536_test:##### restart master2
INFO:tickets.ticket47536_test:##### Check PEM files of master2 (before setting nsslapd-extract-pemfiles
INFO:tickets.ticket47536_test: ######################### Check PEM files (CAcertificate, Server-Cert2, Server-Cert2-Key) not in /etc/dirsrv/slapd-master_2 ######################
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_2/CAcertificate.pem is correctly not generated.
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_2/Server-Cert2.pem is correctly not generated.
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_2/Server-Cert2-Key.pem is correctly not generated.
INFO:tickets.ticket47536_test:##### Set on to nsslapd-extract-pemfiles
INFO:tickets.ticket47536_test:##### restart master2
INFO:tickets.ticket47536_test:##### Check PEM files of master2 (after setting nsslapd-extract-pemfiles
INFO:tickets.ticket47536_test: ######################### Check PEM files (CAcertificate, Server-Cert2, Server-Cert2-Key) in /etc/dirsrv/slapd-master_2 ######################
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_2/CAcertificate.pem is successfully generated.
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_2/Server-Cert2.pem is successfully generated.
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_2/Server-Cert2-Key.pem is successfully generated.
INFO:tickets.ticket47536_test:##### restart master1
INFO:tickets.ticket47536_test: ######################### Creating SSL Keys and Certs Done ######################
INFO:tickets.ticket47536_test:######################### Configure SSL/TLS agreements ######################
INFO:tickets.ticket47536_test:######################## master1 -- startTLS -> master2 #####################
INFO:tickets.ticket47536_test:##################### master1 <- tls_clientAuth -- master2 ##################
INFO:tickets.ticket47536_test:##### Update the agreement of master1
INFO:tickets.ticket47536_test:##### Add the cert to the repl manager on master1
INFO:tickets.ticket47536_test:##### master2 Server Cert in base64 format: MIICyjCCAbKgAwIBAgICA+owDQYJKoZIhvcNAQELBQAwETEPMA0GA1UEAxMGQ0FjZXJ0MB4XDTE2MTAyNjIyMzYyNVoXDTI2MTAyNjIyMzYyNVowPzEdMBsGA1UECxMUMzkwIERpcmVjdG9yeSBTZXJ2ZXIxHjAcBgNVBAMTFWxvY2FsaG9zdC5sb2NhbGRvbWFpbjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALC7qxyr+VzolngwPavncqGfwub2xscF3soJRI5DD9qGWUubKTzQpmXST0gjC8vpSJK/nY1w07DgeDYgpQX9u7zdEU+DAvSiT+6TQJjEbEtZieeWMe2EKpNkVWBP/uWepMnWJK+SIp4j58ZpthEfvU0xGRLxizCxLqYoAMH3/v9Lx9XbryrcAdyCkUn81n1KffA90LoD5nnElG4fM+urH+pHdsTSdJrekb8+XGlACDYKEdd2idAZEKeYGuU0jc9CpEaps+cTHHg593kRan+I6+BzrpMEu9Q3vlrVCITvNBbOGMrCkxbbr9QrcKYpFSmac7Pu/b95b/Gg/DdClPMmd/cCAwEAATANBgkqhkiG9w0BAQsFAAOCAQEAjCE+zgRBx+EJQIlwCGRqj1fTDztniK8anYTAurlrWqrbFaXTVx+2Es021CiYIgm8+Yca+8bCpiRbixtdQo6sPdKCtDVoJQq9FfzicU4KEk99djvZKvjD0HUyOhlc6VNLEAm6aKqKPOwpaJdvQ0Gfc3MYr1cE8MqWsV/pRikGtFiP7OPHTC/ObzwCkaOMq1yKwQAJu+MBYXVue8C+nbIl6IRq3mF3LVG7t98wRUNYRoAjm9lEK6YAE5OTeQZ6XGp1QkTN2stmXLOWlXLQczye47RZB+0J4VizaJWY9Gk9Lz1XPTwa0/SSBucsSs+NM5Vq8x6GI5XsxvHm6clRWNl96w==
INFO:tickets.ticket47536_test:##### Replication manager on master1: cn=replrepl,cn=config
INFO:tickets.ticket47536_test:   ObjectClass:
INFO:tickets.ticket47536_test:        : top
INFO:tickets.ticket47536_test:        : person
INFO:tickets.ticket47536_test:##### Modify the certmap.conf on master1
INFO:tickets.ticket47536_test:##### Update the agreement of master2
INFO:tickets.ticket47536_test: ######################### Configure SSL/TLS agreements Done ######################
INFO:tickets.ticket47536_test: ######################### Adding 5 entries to master1 ######################
INFO:tickets.ticket47536_test: ######################### Adding 5 entries to master2 ######################
INFO:tickets.ticket47536_test:##### Searching for entries on master1...
INFO:tickets.ticket47536_test:##### Searching for entries on master2...
____________________________ test_ticket47619_init _____________________________

topology = Master[localhost.localdomain:38941] -> Consumer[localhost.localdomain:38961

    def test_ticket47619_init(topology):
        """
        Initialize the test environment
        """
        topology.master.plugins.enable(name=PLUGIN_RETRO_CHANGELOG)
        #topology.master.plugins.enable(name=PLUGIN_MEMBER_OF)
        #topology.master.plugins.enable(name=PLUGIN_REFER_INTEGRITY)
        topology.master.stop(timeout=10)
        topology.master.start(timeout=10)
        topology.master.log.info("test_ticket47619_init topology %r" % (topology))
        # the test case will check if a warning message is logged in the
        # error log of the supplier
>       topology.master.errorlog_file = open(topology.master.errlog, "r")
E       IOError: [Errno 2] No such file or directory: '/var/log/dirsrv/slapd-master_1/error'

tickets/ticket47619_test.py:141: IOError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
OK group dirsrv exists
OK user dirsrv exists
('Update succeeded: status ', '0 Total update succeeded')
---------------------------- Captured stderr setup -----------------------------
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}dsALidbbI6PEnNGByoEOxdgIeGxVDT8T1P4mM9gqhtxHjMNN9GSnlIiI4BuoaMmg68VOmL++tH767EiSQv4btcawIYKnGOyd
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}uXnyTgdGovymqsYfwFsUO8rrsEO4SbeRuKlM5tt/DzPtqkj5n5+dI07YfCUXdcWSCAIhid8RqHaWX6eLQoO5IXFyOiRhXqlg
DEBUG:tickets.ticket47619_test:cn=meTo_$host:$port,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
INFO:lib389:Starting total init cn=meTo_$host:$port,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
INFO:tickets.ticket47619_test:Replication is working.
----------------------------- Captured stderr call -----------------------------
INFO:lib389:test_ticket47619_init topology Master[localhost.localdomain:38941] -> Consumer[localhost.localdomain:38961
_____________________________ test_ticket47653_add _____________________________

topology = <tickets.ticket47653MMR_test.TopologyMaster1Master2 object at 0x7f6f543aa610>

    def test_ticket47653_add(topology):
        '''
        This test ADD an entry on MASTER1 where 47653 is fixed. Then it checks
        that entry is replicated on MASTER2 (even if on MASTER2 47653 is NOT
        fixed). Then update on MASTER2 and check the update on MASTER1

        It checks that, bound as bind_entry,
            - we can not ADD an entry without the proper SELFDN aci.
            - with the proper ACI we can not ADD with 'member' attribute
            - with the proper ACI and 'member' it succeeds to ADD
        '''
        topology.master1.log.info("\n\n######################### ADD ######################\n")
        # bind as bind_entry
        topology.master1.log.info("Bind as %s" % BIND_DN)
        topology.master1.simple_bind_s(BIND_DN, BIND_PW)
        # Prepare the entry with multivalued members
        entry_with_members = Entry(ENTRY_DN)
        entry_with_members.setValues('objectclass', 'top', 'person', 'OCticket47653')
        entry_with_members.setValues('sn', ENTRY_NAME)
        entry_with_members.setValues('cn', ENTRY_NAME)
        entry_with_members.setValues('postalAddress', 'here')
        entry_with_members.setValues('postalCode', '1234')
        members = []
        for cpt in range(MAX_OTHERS):
            name = "%s%d" % (OTHER_NAME, cpt)
            members.append("cn=%s,%s" % (name, SUFFIX))
        members.append(BIND_DN)
        entry_with_members.setValues('member', members)
        # Prepare the entry with only one member value
        entry_with_member = Entry(ENTRY_DN)
        entry_with_member.setValues('objectclass', 'top', 'person', 'OCticket47653')
        entry_with_member.setValues('sn', ENTRY_NAME)
        entry_with_member.setValues('cn', ENTRY_NAME)
        entry_with_member.setValues('postalAddress', 'here')
        entry_with_member.setValues('postalCode', '1234')
        member = []
        member.append(BIND_DN)
        entry_with_member.setValues('member', member)
        # entry to add WITH member being BIND_DN but WITHOUT the ACI -> ldap.INSUFFICIENT_ACCESS
        try:
            topology.master1.log.info("Try to add Add %s (aci is missing): %r" % (ENTRY_DN, entry_with_member))
            topology.master1.add_s(entry_with_member)
        except Exception as e:
            topology.master1.log.info("Exception (expected): %s" % type(e).__name__)
            assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
        # Ok Now add the proper ACI
        topology.master1.log.info("Bind as %s and add the ADD SELFDN aci" % DN_DM)
        topology.master1.simple_bind_s(DN_DM, PASSWORD)
        ACI_TARGET = "(target = \"ldap:///cn=*,%s\";)" % SUFFIX
        ACI_TARGETFILTER = "(targetfilter =\"(objectClass=%s)\")" % OC_NAME
        ACI_ALLOW = "(version 3.0; acl \"SelfDN add\"; allow (add)"
        ACI_SUBJECT = " userattr = \"member#selfDN\";)"
        ACI_BODY = ACI_TARGET + ACI_TARGETFILTER + ACI_ALLOW + ACI_SUBJECT
        mod = [(ldap.MOD_ADD, 'aci', ACI_BODY)]
        topology.master1.modify_s(SUFFIX, mod)
        time.sleep(1)
        # bind as bind_entry
        topology.master1.log.info("Bind as %s" % BIND_DN)
        topology.master1.simple_bind_s(BIND_DN, BIND_PW)
        # entry to add WITHOUT member and WITH the ACI -> ldap.INSUFFICIENT_ACCESS
        try:
            topology.master1.log.info("Try to add Add %s (member is missing)" % ENTRY_DN)
            topology.master1.add_s(Entry((ENTRY_DN, {
                'objectclass': ENTRY_OC.split(),
                'sn': ENTRY_NAME,
                'cn': ENTRY_NAME,
                'postalAddress': 'here',
                'postalCode': '1234'})))
        except Exception as e:
            topology.master1.log.info("Exception (expected): %s" % type(e).__name__)
            assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
        # entry to add WITH memberS and WITH the ACI -> ldap.INSUFFICIENT_ACCESS
        # member should contain only one value
        try:
            topology.master1.log.info("Try to add Add %s (with several member values)" % ENTRY_DN)
            topology.master1.add_s(entry_with_members)
        except Exception as e:
            topology.master1.log.info("Exception (expected): %s" % type(e).__name__)
            assert isinstance(e, ldap.INSUFFICIENT_ACCESS)
        topology.master1.log.info("Try to add Add %s should be successful" % ENTRY_DN)
        try:
            topology.master1.add_s(entry_with_member)
        except ldap.LDAPError as e:
            topology.master1.log.info("Failed to add entry, error: " + e.message['desc'])
>           assert False
E           assert False

tickets/ticket47653MMR_test.py:305: AssertionError
----------------------------- Captured stderr call -----------------------------
INFO:lib389:
######################### ADD ######################
INFO:lib389:Bind as cn=bind_entry, dc=example,dc=com
INFO:lib389:Try to add Add cn=test_entry, dc=example,dc=com (aci is missing): dn: cn=test_entry, dc=example,dc=com
cn: test_entry
member: cn=bind_entry, dc=example,dc=com
objectclass: top
objectclass: person
objectclass: OCticket47653
postalAddress: here
postalCode: 1234
sn: test_entry
INFO:lib389:Exception (expected):
INSUFFICIENT_ACCESS INFO:lib389:Bind as cn=Direct
ory Manager and add the ADD SELFDN aci INFO:lib389:Bind as cn=bind_entry, dc=example,dc=com INFO:lib389:Try to add Add cn=test_entry, dc=example,dc=com '(member' is 'missing)' INFO:lib389:Exception '(expected):' INSUFFICIENT_ACCESS INFO:lib389:Try to add Add cn=test_entry, dc=example,dc=com '(with' several member 'values)' INFO:lib389:Exception '(expected):' INSUFFICIENT_ACCESS INFO:lib389:Try to add Add cn=test_entry, dc=example,dc=com should be successful INFO:lib389:Failed to add entry, error: Insufficient access ___________________________ test_ticket47653_modify ____________________________ topology = '<tickets.ticket47653MMR_test.TopologyMaster1Master2' object at '0x7f6f543aa610>' def 'test_ticket47653_modify(topology):' ''\'''\'''\''' This test MOD an entry on MASTER1 where 47653 is fixed. Then it checks that update is replicated on MASTER2 '(even' if on MASTER2 47653 is NOT 'fixed).' Then update on MASTER2 '(bound' as 'BIND_DN).' This update may fail whether or not 47653 is fixed on MASTER2 It checks that, bound as bind_entry, - we can not modify an entry without the proper SELFDN aci. 
        - adding the ACI, we can modify the entry
        '''
        # bind as bind_entry
        topology.master1.log.info("Bind as %s" % BIND_DN)
        topology.master1.simple_bind_s(BIND_DN, BIND_PW)

        topology.master1.log.info("\n\n######################### MODIFY ######################\n")

        # entry to modify WITH member being BIND_DN but WITHOUT the ACI -> ldap.INSUFFICIENT_ACCESS
        try:
            topology.master1.log.info("Try to modify %s (aci is missing)" % ENTRY_DN)
            mod = [(ldap.MOD_REPLACE, 'postalCode', '9876')]
            topology.master1.modify_s(ENTRY_DN, mod)
        except Exception as e:
            topology.master1.log.info("Exception (expected): %s" % type(e).__name__)
            assert isinstance(e, ldap.INSUFFICIENT_ACCESS)

        # Ok Now add the proper ACI
        topology.master1.log.info("Bind as %s and add the WRITE SELFDN aci" % DN_DM)
        topology.master1.simple_bind_s(DN_DM, PASSWORD)

        ACI_TARGET       = "(target = \"ldap:///cn=*,%s\")" % SUFFIX
        ACI_TARGETATTR   = "(targetattr = *)"
        ACI_TARGETFILTER = "(targetfilter =\"(objectClass=%s)\")" % OC_NAME
        ACI_ALLOW        = "(version 3.0; acl \"SelfDN write\"; allow (write)"
        ACI_SUBJECT      = " userattr = \"member#selfDN\";)"
        ACI_BODY         = ACI_TARGET + ACI_TARGETATTR + ACI_TARGETFILTER + ACI_ALLOW + ACI_SUBJECT
        mod = [(ldap.MOD_ADD, 'aci', ACI_BODY)]
        topology.master1.modify_s(SUFFIX, mod)
        time.sleep(1)

        # bind as bind_entry
        topology.master1.log.info("M1: Bind as %s" % BIND_DN)
        topology.master1.simple_bind_s(BIND_DN, BIND_PW)

        # modify the entry and checks the value
        topology.master1.log.info("M1: Try to modify %s. It should succeeds" % ENTRY_DN)
        mod = [(ldap.MOD_REPLACE, 'postalCode', '1928')]
>       topology.master1.modify_s(ENTRY_DN, mod)

tickets/ticket47653MMR_test.py:387:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:402: in modify_s
    return self.result(msgid,all=1,timeout=self.timeout)
../../../lib389/lib389/__init__.py:127: in inner
    objtype, data = f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:503: in result
    resp_type, resp_data, resp_msgid = self.result2(msgid,all,timeout)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:507: in result2
    resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all,timeout)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:514: in result3
    resp_ctrl_classes=resp_ctrl_classes
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:521: in result4
    ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f537f90e0>
func = <built-in method result4 of LDAP object at 0x7f6f54393198>
args = (37, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None
e = INSUFFICIENT_ACCESS({'desc': 'Insufficient access'},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               INSUFFICIENT_ACCESS: {'desc': 'Insufficient access'}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: INSUFFICIENT_ACCESS
----------------------------- Captured stderr call -----------------------------
INFO:lib389:Bind as cn=bind_entry, dc=example,dc=com
INFO:lib389:

######################### MODIFY ######################

INFO:lib389:Try to modify cn=test_entry, dc=example,dc=com (aci is missing)
INFO:lib389:Exception (expected): INSUFFICIENT_ACCESS
INFO:lib389:Bind as cn=Directory Manager and add the WRITE SELFDN aci
INFO:lib389:M1: Bind as cn=bind_entry, dc=example,dc=com
INFO:lib389:M1: Try to modify cn=test_entry, dc=example,dc=com. It should succeeds
____________________________ test_ticket47669_init _____________________________

topology = <tickets.ticket47669_test.TopologyStandalone object at 0x7f6f538de490>

    def test_ticket47669_init(topology):
        """
        Add cn=changelog5,cn=config
        Enable cn=Retro Changelog Plugin,cn=plugins,cn=config
        """
        log.info('Testing Ticket 47669 - Test duration syntax in the changelogs')

        # bind as directory manager
        topology.standalone.log.info("Bind as %s" % DN_DM)
        topology.standalone.simple_bind_s(DN_DM, PASSWORD)

        try:
            changelogdir = "%s/changelog" % topology.standalone.dbdir
            topology.standalone.add_s(Entry((CHANGELOG,
                                             {'objectclass': 'top extensibleObject'.split(),
                                              'nsslapd-changelogdir': changelogdir})))
        except ldap.LDAPError as e:
            log.error('Failed to add ' + CHANGELOG + ': error ' + e.message['desc'])
            assert False

        try:
            topology.standalone.modify_s(RETROCHANGELOG, [(ldap.MOD_REPLACE, 'nsslapd-pluginEnabled', 'on')])
        except ldap.LDAPError as e:
            log.error('Failed to enable ' + RETROCHANGELOG + ': error ' + e.message['desc'])
            assert False

        # restart the server
>       topology.standalone.restart(timeout=10)

tickets/ticket47669_test.py:103:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../lib389/lib389/__init__.py:1215: in restart
    self.start(timeout)
../../../lib389/lib389/__init__.py:1096: in start
    "dirsrv@%s" % self.serverid])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

popenargs = (['/usr/bin/systemctl', 'start', 'dirsrv@standalone'],), kwargs = {}
retcode = 1, cmd = ['/usr/bin/systemctl', 'start', 'dirsrv@standalone']

    def check_call(*popenargs, **kwargs):
        """Run command with arguments.  Wait for command to complete.  If the
        exit code was zero then return, otherwise raise CalledProcessError.
        The CalledProcessError object will have the return code in the
        returncode attribute.

        The arguments are the same as for the Popen constructor.  Example:

        check_call(["ls", "-l"])
        """
        retcode = call(*popenargs, **kwargs)
        if retcode:
            cmd = kwargs.get("args")
            if cmd is None:
                cmd = popenargs[0]
>           raise CalledProcessError(retcode, cmd)
E           CalledProcessError: Command '['/usr/bin/systemctl', 'start', 'dirsrv@standalone']' returned non-zero exit status 1

/usr/lib64/python2.7/subprocess.py:541: CalledProcessError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket47669_test:Testing Ticket 47669 - Test duration syntax in the changelogs
INFO:lib389:Bind as cn=Directory Manager
Job for dirsrv@standalone.service failed because the control process exited with error code. See "systemctl status dirsrv@standalone.service" and "journalctl -xe" for details.
______________________ test_ticket47669_changelog_maxage _______________________

topology = <tickets.ticket47669_test.TopologyStandalone object at 0x7f6f538de490>

    def test_ticket47669_changelog_maxage(topology):
        """
        Test nsslapd-changelogmaxage in cn=changelog5,cn=config
        """
        log.info('1. Test nsslapd-changelogmaxage in cn=changelog5,cn=config')

        # bind as directory manager
        topology.standalone.log.info("Bind as %s" % DN_DM)
>       topology.standalone.simple_bind_s(DN_DM, PASSWORD)

tickets/ticket47669_test.py:159:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:223: in simple_bind_s
    resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:514: in result3
    resp_ctrl_classes=resp_ctrl_classes
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:521: in result4
    ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f537a7fc8>
func = <built-in method result4 of LDAP object at 0x7f6f54487f58>
args = (13, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket47669_test:1. Test nsslapd-changelogmaxage in cn=changelog5,cn=config
INFO:lib389:Bind as cn=Directory Manager
___________________ test_ticket47669_changelog_triminterval ____________________

topology = <tickets.ticket47669_test.TopologyStandalone object at 0x7f6f538de490>

    def test_ticket47669_changelog_triminterval(topology):
        """
        Test nsslapd-changelogtrim-interval in cn=changelog5,cn=config
        """
        log.info('2. Test nsslapd-changelogtrim-interval in cn=changelog5,cn=config')

        # bind as directory manager
        topology.standalone.log.info("Bind as %s" % DN_DM)
>       topology.standalone.simple_bind_s(DN_DM, PASSWORD)

tickets/ticket47669_test.py:179:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:222: in simple_bind_s
    msgid = self.simple_bind(who,cred,serverctrls,clientctrls)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:216: in simple_bind
    return self._ldap_call(self._l.simple_bind,who,cred,RequestControlTuples(serverctrls),RequestControlTuples(clientctrls))
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f537a7fc8>
func = <built-in method simple_bind of LDAP object at 0x7f6f54487f58>
args = ('cn=Directory Manager', 'password', None, None), kwargs = {}
diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket47669_test:2. Test nsslapd-changelogtrim-interval in cn=changelog5,cn=config
INFO:lib389:Bind as cn=Directory Manager
_________________ test_ticket47669_changelog_compactdbinterval _________________

topology = <tickets.ticket47669_test.TopologyStandalone object at 0x7f6f538de490>

    def test_ticket47669_changelog_compactdbinterval(topology):
        """
        Test nsslapd-changelogcompactdb-interval in cn=changelog5,cn=config
        """
        log.info('3. Test nsslapd-changelogcompactdb-interval in cn=changelog5,cn=config')

        # bind as directory manager
        topology.standalone.log.info("Bind as %s" % DN_DM)
>       topology.standalone.simple_bind_s(DN_DM, PASSWORD)

tickets/ticket47669_test.py:199:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:222: in simple_bind_s
    msgid = self.simple_bind(who,cred,serverctrls,clientctrls)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:216: in simple_bind
    return self._ldap_call(self._l.simple_bind,who,cred,RequestControlTuples(serverctrls),RequestControlTuples(clientctrls))
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f537a7fc8>
func = <built-in method simple_bind of LDAP object at 0x7f6f54487f58>
args = ('cn=Directory Manager', 'password', None, None), kwargs = {}
diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket47669_test:3. Test nsslapd-changelogcompactdb-interval in cn=changelog5,cn=config
INFO:lib389:Bind as cn=Directory Manager
____________________ test_ticket47669_retrochangelog_maxage ____________________

topology = <tickets.ticket47669_test.TopologyStandalone object at 0x7f6f538de490>

    def test_ticket47669_retrochangelog_maxage(topology):
        """
        Test nsslapd-changelogmaxage in cn=Retro Changelog Plugin,cn=plugins,cn=config
        """
        log.info('4. Test nsslapd-changelogmaxage in cn=Retro Changelog Plugin,cn=plugins,cn=config')

        # bind as directory manager
        topology.standalone.log.info("Bind as %s" % DN_DM)
>       topology.standalone.simple_bind_s(DN_DM, PASSWORD)

tickets/ticket47669_test.py:219:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:222: in simple_bind_s
    msgid = self.simple_bind(who,cred,serverctrls,clientctrls)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:216: in simple_bind
    return self._ldap_call(self._l.simple_bind,who,cred,RequestControlTuples(serverctrls),RequestControlTuples(clientctrls))
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f537a7fc8>
func = <built-in method simple_bind of LDAP object at 0x7f6f54487f58>
args = ('cn=Directory Manager', 'password', None, None), kwargs = {}
diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket47669_test:4. Test nsslapd-changelogmaxage in cn=Retro Changelog Plugin,cn=plugins,cn=config
INFO:lib389:Bind as cn=Directory Manager
____________________________ test_ticket47823_init _____________________________

topology = <tickets.ticket47823_test.TopologyStandalone object at 0x7f6f54202d90>

    def test_ticket47823_init(topology):
        """
        """

        # Enabled the plugins
        topology.standalone.plugins.enable(name=PLUGIN_ATTR_UNIQUENESS)
        topology.standalone.restart(timeout=120)

        topology.standalone.add_s(Entry((PROVISIONING_DN, {'objectclass': "top nscontainer".split(),
                                                           'cn': PROVISIONING_CN})))
        topology.standalone.add_s(Entry((ACTIVE_DN, {'objectclass': "top nscontainer".split(),
                                                     'cn': ACTIVE_CN})))
        topology.standalone.add_s(Entry((STAGE_DN, {'objectclass': "top nscontainer".split(),
                                                    'cn': STAGE_CN})))
        topology.standalone.add_s(Entry((DELETE_DN, {'objectclass': "top nscontainer".split(),
                                                     'cn': DELETE_CN})))
>       topology.standalone.errorlog_file = open(topology.standalone.errlog, "r")
E       IOError: [Errno 2] No such file or directory: '/var/log/dirsrv/slapd-standalone/error'

tickets/ticket47823_test.py:477: IOError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
______________________ test_ticket47823_invalid_config_1 _______________________

topology = <tickets.ticket47823_test.TopologyStandalone object at 0x7f6f54202d90>

    def test_ticket47823_invalid_config_1(topology):
        '''
        Check that an invalid config is detected. No uniqueness enforced
        Using old config: arg0 is missing
        '''
        _header(topology, "Invalid config (old): arg0 is missing")

        _config_file(topology, action='save')

        # create an invalid config without arg0
        config = _build_config(topology, attr_name='cn', subtree_1=ACTIVE_DN, subtree_2=None, type_config='old', across_subtrees=False)

        del config.data['nsslapd-pluginarg0']
        # replace 'cn' uniqueness entry
        try:
            topology.standalone.delete_s(config.dn)
        except ldap.NO_SUCH_OBJECT:
            pass
        topology.standalone.add_s(config)

        topology.standalone.getEntry(config.dn, ldap.SCOPE_BASE, "(objectclass=nsSlapdPlugin)", ALL_CONFIG_ATTRS)

        # Check the server did not restart
        topology.standalone.modify_s(DN_CONFIG, [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', '65536')])
        try:
>           topology.standalone.restart(timeout=5)

tickets/ticket47823_test.py:636:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../lib389/lib389/__init__.py:1215: in restart
    self.start(timeout)
../../../lib389/lib389/__init__.py:1096: in start
    "dirsrv@%s" % self.serverid])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

popenargs = (['/usr/bin/systemctl', 'start', 'dirsrv@standalone'],), kwargs = {}
retcode = 1, cmd = ['/usr/bin/systemctl', 'start', 'dirsrv@standalone']

    def check_call(*popenargs, **kwargs):
        """Run command with arguments.  Wait for command to complete.  If the
        exit code was zero then return, otherwise raise CalledProcessError.
        The CalledProcessError object will have the return code in the
        returncode attribute.

        The arguments are the same as for the Popen constructor.  Example:

        check_call(["ls", "-l"])
        """
        retcode = call(*popenargs, **kwargs)
        if retcode:
            cmd = kwargs.get("args")
            if cmd is None:
                cmd = popenargs[0]
>           raise CalledProcessError(retcode, cmd)
E           CalledProcessError: Command '['/usr/bin/systemctl', 'start', 'dirsrv@standalone']' returned non-zero exit status 1

/usr/lib64/python2.7/subprocess.py:541: CalledProcessError
----------------------------- Captured stderr call -----------------------------
INFO:lib389: ###############################################
INFO:lib389:#######
INFO:lib389:####### Invalid config (old): arg0 is missing
INFO:lib389:#######
INFO:lib389:###############################################
Job for dirsrv@standalone.service failed because the control process exited with error code. See "systemctl status dirsrv@standalone.service" and "journalctl -xe" for details.
______________________ test_ticket47823_invalid_config_2 _______________________

topology = <tickets.ticket47823_test.TopologyStandalone object at 0x7f6f54202d90>

    def test_ticket47823_invalid_config_2(topology):
        '''
        Check that an invalid config is detected. No uniqueness enforced
        Using old config: arg1 is missing
        '''
        _header(topology, "Invalid config (old): arg1 is missing")

        _config_file(topology, action='save')

        # create an invalid config without arg0
>       config = _build_config(topology, attr_name='cn', subtree_1=ACTIVE_DN, subtree_2=None, type_config='old', across_subtrees=False)

tickets/ticket47823_test.py:672:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tickets/ticket47823_test.py:124: in _build_config
    config = _uniqueness_config_entry(topology, attr_name)
tickets/ticket47823_test.py:112: in _uniqueness_config_entry
    'nsslapd-pluginDescription'])
../../../lib389/lib389/__init__.py:1574: in getEntry
    restype, obj = self.result(res)
../../../lib389/lib389/__init__.py:127: in inner
    objtype, data = f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:503: in result
    resp_type, resp_data, resp_msgid = self.result2(msgid,all,timeout)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:507: in result2
    resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all,timeout)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:514: in result3
    resp_ctrl_classes=resp_ctrl_classes
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:521: in result4
    ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f541ec830>
func = <built-in method result4 of LDAP object at 0x7f6f54487b20>
args = (15, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:lib389: ###############################################
INFO:lib389:#######
INFO:lib389:####### Invalid config (old): arg1 is missing
INFO:lib389:#######
INFO:lib389:###############################################
______________________ test_ticket47823_invalid_config_3 _______________________

topology = <tickets.ticket47823_test.TopologyStandalone object at 0x7f6f54202d90>

    def test_ticket47823_invalid_config_3(topology):
        '''
        Check that an invalid config is detected. No uniqueness enforced
        Using old config: arg0 is missing
        '''
        _header(topology, "Invalid config (old): arg0 is missing but new config attrname exists")

        _config_file(topology, action='save')

        # create an invalid config without arg0
>       config = _build_config(topology, attr_name='cn', subtree_1=ACTIVE_DN, subtree_2=None, type_config='old', across_subtrees=False)

tickets/ticket47823_test.py:723:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tickets/ticket47823_test.py:124: in _build_config
    config = _uniqueness_config_entry(topology, attr_name)
tickets/ticket47823_test.py:112: in _uniqueness_config_entry
    'nsslapd-pluginDescription'])
../../../lib389/lib389/__init__.py:1573: in getEntry
    res = self.search(*args, **kwargs)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:594: in search
    return self.search_ext(base,scope,filterstr,attrlist,attrsonly,None,None)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:586: in search_ext
    timeout,sizelimit,
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f541ec830>
func = <built-in method search_ext of LDAP object at 0x7f6f54487b20>
args = ('cn=attribute uniqueness,cn=plugins,cn=config', 0, '(objectclass=nsSlapdPlugin)', ['objectClass', 'cn', 'nsslapd-pluginPath', 'nsslapd-pluginInitfunc', 'nsslapd-pluginType', 'nsslapd-pluginEnabled', ...], 0, None, ...)
kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:lib389: ###############################################
INFO:lib389:#######
INFO:lib389:####### Invalid config (old): arg0 is missing but new config attrname exists
INFO:lib389:#######
INFO:lib389:###############################################
______________________ test_ticket47823_invalid_config_4 _______________________

topology = <tickets.ticket47823_test.TopologyStandalone object at 0x7f6f54202d90>

    def test_ticket47823_invalid_config_4(topology):
        '''
        Check that an invalid config is detected. No uniqueness enforced
        Using old config: arg1 is missing
        '''
        _header(topology, "Invalid config (old): arg1 is missing but new config exist")

        _config_file(topology, action='save')

        # create an invalid config without arg0
>       config = _build_config(topology, attr_name='cn', subtree_1=ACTIVE_DN, subtree_2=None, type_config='old', across_subtrees=False)

tickets/ticket47823_test.py:776:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tickets/ticket47823_test.py:124: in _build_config
    config = _uniqueness_config_entry(topology, attr_name)
tickets/ticket47823_test.py:112: in _uniqueness_config_entry
    'nsslapd-pluginDescription'])
../../../lib389/lib389/__init__.py:1573: in getEntry
    res = self.search(*args, **kwargs)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:594: in search
    return self.search_ext(base,scope,filterstr,attrlist,attrsonly,None,None)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:586: in search_ext
    timeout,sizelimit,
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f541ec830>
func = <built-in method search_ext of LDAP object at 0x7f6f54487b20>
args = ('cn=attribute uniqueness,cn=plugins,cn=config', 0, '(objectclass=nsSlapdPlugin)', ['objectClass', 'cn', 'nsslapd-pluginPath', 'nsslapd-pluginInitfunc', 'nsslapd-pluginType', 'nsslapd-pluginEnabled', ...], 0, None, ...)
kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if
'self._trace_level>=9:' 'traceback.print_stack(limit=self._trace_s
tack_limit,file=self._trace_file)' diagnostic_message_success = None try: try: '>' result = 'func(*args,**kwargs)' E SERVER_DOWN: '{'\''desc'\'':' '"Can'\''t' contact LDAP 'server"}' /usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN ----------------------------- Captured stderr call ----------------------------- INFO:lib389: '###############################################' INFO:lib389:####### INFO:lib389:####### Invalid config '(old):' arg1 is missing but new config exist INFO:lib389:####### INFO:lib389:############################################### ______________________ test_ticket47823_invalid_config_5 _______________________ topology = '<tickets.ticket47823_test.TopologyStandalone' object at '0x7f6f54202d90>' def 'test_ticket47823_invalid_config_5(topology):' ''\'''\'''\''' Check that an invalid config is detected. No uniqueness enforced Using new config: uniqueness-attribute-name is missing ''\'''\'''\''' '_header(topology,' '"Invalid' config '(new):' uniqueness-attribute-name is 'missing")' '_config_file(topology,' 'action='\''save'\'')' '#' create an invalid config without arg0 '>' config = '_build_config(topology,' 'attr_name='\''cn'\'',' subtree_1=ACTIVE_DN, subtree_2=None, 'type_config='\''new'\'',' 'across_subtrees=False)' tickets/ticket47823_test.py:828: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tickets/ticket47823_test.py:131: in _build_config config = '_uniqueness_config_entry(topology,' 'attr_name)' tickets/ticket47823_test.py:112: in _uniqueness_config_entry ''\''nsslapd-pluginDescription'\''])' ../../../lib389/lib389/__init__.py:1573: in getEntry res = 'self.search(*args,' '**kwargs)' ../../../lib389/lib389/__init__.py:159: in inner return 'f(*args,' '**kwargs)' /usr/lib64/python2.7/site-packages/ldap/ldapobject.py:594: in search return 'self.search_ext(base,scope,filterstr,attrlist,attrsonly,None,None)' ../../../lib389/lib389/__init__.py:159: in inner return 'f(*args,' '**kwargs)' 
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:5
86: in search_ext timeout,sizelimit, ../../../lib389/lib389/__init__.py:159: in inner return 'f(*args,' '**kwargs)' _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = '<lib389.DirSrv' instance at '0x7f6f541ec830>' func = '<built-in' method search_ext of LDAP object at '0x7f6f54487b20>' args = '('\''cn=attribute' 'uniqueness,cn=plugins,cn=config'\'',' 0, ''\''(objectclass=nsSlapdPlugin)'\'',' '['\''objectClass'\'',' ''\''cn'\'',' ''\''nsslapd-pluginPath'\'',' ''\''nsslapd-pluginInitfunc'\'',' ''\''nsslapd-pluginType'\'',' ''\''nsslapd-pluginEnabled'\'',' '...],' 0, None, '...)' kwargs = '{},' diagnostic_message_success = None e = 'SERVER_DOWN({'\''desc'\'':' '"Can'\''t' contact LDAP 'server"},)' def '_ldap_call(self,func,*args,**kwargs):' '"""' Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs '"""' 'self._ldap_object_lock.acquire()' if __debug__: if 'self._trace_level>=1:' 'self._trace_file.write('\''***' %s %s - '%s\n%s\n'\''' % '(' 'repr(self),' self._uri, ''\''.'\''.join((self.__class__.__name__,func.__name__)),' 'pprint.pformat((args,kwargs))' '))' if 'self._trace_level>=9:' 'traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)' diagnostic_message_success = None try: try: '>' result = 'func(*args,**kwargs)' E SERVER_DOWN: '{'\''desc'\'':' '"Can'\''t' contact LDAP 'server"}' /usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN ----------------------------- Captured stderr call ----------------------------- INFO:lib389: '###############################################' INFO:lib389:####### INFO:lib389:####### Invalid config '(new):' uniqueness-attribute-name is missing INFO:lib389:####### INFO:lib389:############################################### ______________________ test_ticket47823_invalid_config_6 _______________________ topology = '<tickets.ticket47823_test.TopologyStandalone' object at '0x7f6f54202d90>' def 'test_ticket47823_invalid_config_6(topology):' 
        '''
        Check that an invalid config is detected. No uniqueness enforced
        Using new config: uniqueness-subtrees is missing
        '''
        _header(topology, "Invalid config (new): uniqueness-subtrees is missing")

        _config_file(topology, action='save')

        # create an invalid config without arg0
>       config = _build_config(topology, attr_name='cn', subtree_1=ACTIVE_DN, subtree_2=None, type_config='new', across_subtrees=False)

tickets/ticket47823_test.py:879:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tickets/ticket47823_test.py:131: in _build_config
    config = _uniqueness_config_entry(topology, attr_name)
tickets/ticket47823_test.py:112: in _uniqueness_config_entry
    'nsslapd-pluginDescription'])
../../../lib389/lib389/__init__.py:1573: in getEntry
    res = self.search(*args, **kwargs)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:594: in search
    return self.search_ext(base,scope,filterstr,attrlist,attrsonly,None,None)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:586: in search_ext
    timeout,sizelimit,
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f541ec830>
func = <built-in method search_ext of LDAP object at 0x7f6f54487b20>
args = ('cn=attribute uniqueness,cn=plugins,cn=config', 0, '(objectclass=nsSlapdPlugin)', ['objectClass', 'cn', 'nsslapd-pluginPath', 'nsslapd-pluginInitfunc', 'nsslapd-pluginType', 'nsslapd-pluginEnabled', ...], 0, None, ...)
kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:lib389: ###############################################
INFO:lib389:#######
INFO:lib389:####### Invalid config (new): uniqueness-subtrees is missing
INFO:lib389:#######
INFO:lib389:###############################################
______________________ test_ticket47823_invalid_config_7 _______________________

topology = <tickets.ticket47823_test.TopologyStandalone object at 0x7f6f54202d90>

    def test_ticket47823_invalid_config_7(topology):
        '''
        Check that an invalid config is detected. No uniqueness enforced
        Using new config: uniqueness-subtrees is missing
        '''
        _header(topology, "Invalid config (new): uniqueness-subtrees are invalid")

        _config_file(topology, action='save')

        # create an invalid config without arg0
>       config = _build_config(topology, attr_name='cn', subtree_1="this_is dummy DN", subtree_2="an other=dummy DN", type_config='new', across_subtrees=False)

tickets/ticket47823_test.py:930:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tickets/ticket47823_test.py:131: in _build_config
    config = _uniqueness_config_entry(topology, attr_name)
tickets/ticket47823_test.py:112: in _uniqueness_config_entry
    'nsslapd-pluginDescription'])
../../../lib389/lib389/__init__.py:1573: in getEntry
    res = self.search(*args, **kwargs)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:594: in search
    return self.search_ext(base,scope,filterstr,attrlist,attrsonly,None,None)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:586: in search_ext
    timeout,sizelimit,
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f541ec830>
func = <built-in method search_ext of LDAP object at 0x7f6f54487b20>
args = ('cn=attribute uniqueness,cn=plugins,cn=config', 0, '(objectclass=nsSlapdPlugin)', ['objectClass', 'cn', 'nsslapd-pluginPath', 'nsslapd-pluginInitfunc', 'nsslapd-pluginType', 'nsslapd-pluginEnabled', ...], 0, None, ...)
kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:lib389: ###############################################
INFO:lib389:#######
INFO:lib389:####### Invalid config (new): uniqueness-subtrees are invalid
INFO:lib389:#######
INFO:lib389:###############################################
____________________________ test_ticket47871_init _____________________________

topology = Master[localhost.localdomain:38941] -> Consumer[localhost.localdomain:38961

    def test_ticket47871_init(topology):
        """
        Initialize the test environment
        """
        topology.master.plugins.enable(name=PLUGIN_RETRO_CHANGELOG)
        mod = [(ldap.MOD_REPLACE, 'nsslapd-changelogmaxage', "10s"),   # 10 second triming
               (ldap.MOD_REPLACE, 'nsslapd-changelog-trim-interval', "5s")]
        topology.master.modify_s("cn=%s,%s" % (PLUGIN_RETRO_CHANGELOG, DN_PLUGIN), mod)
        #topology.master.plugins.enable(name=PLUGIN_MEMBER_OF)
        #topology.master.plugins.enable(name=PLUGIN_REFER_INTEGRITY)
        topology.master.stop(timeout=10)
        topology.master.start(timeout=10)

        topology.master.log.info("test_ticket47871_init topology %r" % (topology))
        # the test case will check if a warning message is logged in the
        # error log of the supplier
>       topology.master.errorlog_file = open(topology.master.errlog, "r")
E       IOError: [Errno 2] No such file or directory: '/var/log/dirsrv/slapd-master_1/error'

tickets/ticket47871_test.py:147: IOError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
OK group dirsrv exists
OK user dirsrv exists
('Update succeeded: status ', '0 Total update succeeded')
---------------------------- Captured stderr setup -----------------------------
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}yt2kYjlt1QPsQNaYRzCgX9MO1Ms2i2J0H8dAj6yPxLRw/5jz7Te8Lwik0aRIBrgw+sZQib0kqyWJaUbezyX5TYNg+OzB7CEw

INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn:
cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}+Ndoaaf7S7UwQfb77oMhHN7tB3PniBK6EcFSSJkKkyND2plh8cavb8Oin2TM3wLxAsD32ULnPUpesAujXvpLFi8k91GjgwIp

DEBUG:tickets.ticket47871_test:cn=meTo_$host:$port,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
INFO:lib389:Starting total init cn=meTo_$host:$port,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
INFO:tickets.ticket47871_test:Replication is working.
----------------------------- Captured stderr call -----------------------------
INFO:lib389:test_ticket47871_init topology Master[localhost.localdomain:38941] -> Consumer[localhost.localdomain:38961
_______________________________ test_ticket48109 _______________________________

topology = <tickets.ticket48109_test.TopologyStandalone object at 0x7f6f537810d0>

    def test_ticket48109(topology):
        '''
        Set SubStr lengths to cn=uid,cn=index,...
          objectClass: extensibleObject
          nsIndexType: sub
          nsSubStrBegin: 2
          nsSubStrEnd: 2
        '''
        log.info('Test case 0')
        # add substr setting to UID_INDEX
        try:
            topology.standalone.modify_s(UID_INDEX,
                                         [(ldap.MOD_ADD, 'objectClass', 'extensibleObject'),
                                          (ldap.MOD_ADD, 'nsIndexType', 'sub'),
                                          (ldap.MOD_ADD, 'nsSubStrBegin', '2'),
                                          (ldap.MOD_ADD, 'nsSubStrEnd', '2')])
        except ldap.LDAPError as e:
            log.error('Failed to add substr lengths: error ' + e.message['desc'])
            assert False

        # restart the server to apply the indexing
        topology.standalone.restart(timeout=10)

        # add a test user
        UID = 'auser0'
        USER_DN = 'uid=%s,%s' % (UID, SUFFIX)
        try:
            topology.standalone.add_s(Entry((USER_DN, {
                                             'objectclass': 'top person organizationalPerson inetOrgPerson'.split(),
                                             'cn': 'a user0',
                                             'sn': 'user0',
                                             'givenname': 'a',
                                             'mail': UID})))
        except ldap.LDAPError as e:
            log.error('Failed to add ' + USER_DN + ': error ' + e.message['desc'])
            assert False

        entries = topology.standalone.search_s(SUFFIX, ldap.SCOPE_SUBTREE, '(uid=a*)')
        assert len(entries) == 1

        # restart the server to check the access log
        topology.standalone.restart(timeout=10)

        cmdline = 'egrep %s %s | egrep "uid=a\*"' % (SUFFIX, topology.standalone.accesslog)
        p = os.popen(cmdline, "r")
        l0 = p.readline()
        if l0 == "":
            log.error('Search with "(uid=a*)" is not logged in ' + topology.standalone.accesslog)
>           assert False
E           assert False

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48109_test.py>:121: AssertionError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket48109_test:Test case 0
ERROR:tickets.ticket48109_test:Search with "(uid=a*)" is not logged in /var/log/dirsrv/slapd-standalone/access
____________________ test_ticket48266_count_csn_evaluation _____________________

topology = <tickets.ticket48266_test.TopologyReplication object at 0x7f6f4a5efdd0>
entries = None

    def test_ticket48266_count_csn_evaluation(topology, entries):
        ents = topology.master1.agreement.list(suffix=SUFFIX)
        assert len(ents) == 1
>       first_csn = _get_first_not_replicated_csn(topology)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48266_test.py>:328:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

topology = <tickets.ticket48266_test.TopologyReplication object at 0x7f6f4a5efdd0>

    def _get_first_not_replicated_csn(topology):
        name = "cn=%s2,%s" % (NEW_ACCOUNT, SUFFIX)

        # read the first CSN that will not be replicated
        mod = [(ldap.MOD_REPLACE, 'telephonenumber', str(123456))]
        topology.master1.modify_s(name, mod)
        msgid = topology.master1.search_ext(name, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
        rtype, rdata, rmsgid = topology.master1.result2(msgid)
        attrs = None
        for dn, raw_attrs in rdata:
            topology.master1.log.info("dn: %s" % dn)
            if 'nscpentrywsi' in raw_attrs:
                attrs = raw_attrs['nscpentrywsi']
        assert attrs
        for attr in attrs:
            if attr.lower().startswith('telephonenumber'):
                break
        assert attr

        # now retrieve the CSN of the operation we are looking for
        csn = None
        topology.master1.stop(timeout=10)
        file_path = os.path.join(topology.master1.prefix, "var/log/dirsrv/slapd-%s/access" % topology.master1.serverid)
>       file_obj = open(file_path, "r")
E       IOError: [Errno 2] No such file or directory: '/usr/var/log/dirsrv/slapd-master_1/access'

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48266_test.py>:276: IOError
----------------------------- Captured stderr call -----------------------------
INFO:lib389:dn: cn=new_account2,dc=example,dc=com
__________________ test_ticket48270_homeDirectory_indexed_cis __________________

topology = <tickets.ticket48270_test.TopologyStandalone object at 0x7f6f4a5c4b10>

    def test_ticket48270_homeDirectory_indexed_cis(topology):
        log.info("\n\nindex homeDirectory in caseIgnoreIA5Match and caseExactIA5Match")
        try:
            ent = topology.standalone.getEntry(HOMEDIRECTORY_INDEX, ldap.SCOPE_BASE)
        except ldap.NO_SUCH_OBJECT:
            topology.standalone.add_s(Entry((HOMEDIRECTORY_INDEX, {
                                             'objectclass': "top nsIndex".split(),
                                             'cn': HOMEDIRECTORY_CN,
                                             'nsSystemIndex': 'false',
                                             'nsIndexType': 'eq'})))
        #log.info("attach debugger")
        #time.sleep(60)

        IGNORE_MR_NAME='caseIgnoreIA5Match'
        EXACT_MR_NAME='caseExactIA5Match'
        mod = [(ldap.MOD_REPLACE, MATCHINGRULE, (IGNORE_MR_NAME, EXACT_MR_NAME))]
        topology.standalone.modify_s(HOMEDIRECTORY_INDEX, mod)

        #topology.standalone.stop(timeout=10)
        log.info("successfully checked that filter with exact mr , a filter with lowercase eq is failing")
        #assert topology.standalone.db2index(bename=DEFAULT_BENAME, suffixes=None, attrs=['homeDirectory'])
        #topology.standalone.start(timeout=10)
        args = {TASK_WAIT: True}
        topology.standalone.tasks.reindex(suffix=SUFFIX, attrname='homeDirectory', args=args)

        log.info("Check indexing succeeded with a specified matching rule")
        file_path = os.path.join(topology.standalone.prefix, "var/log/dirsrv/slapd-%s/errors" % topology.standalone.serverid)
>       file_obj = open(file_path, "r")
E       IOError: [Errno 2] No such file or directory: '/usr/var/log/dirsrv/slapd-standalone/errors'

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48270_test.py>:100: IOError
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket48270_test:
index homeDirectory in caseIgnoreIA5Match and caseExactIA5Match
INFO:tickets.ticket48270_test:successfully checked that filter with exact mr , a filter with lowercase eq is failing
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Index task index_homeDirectory_10272016_011943 completed successfully
INFO:tickets.ticket48270_test:Check indexing succeeded with a specified matching rule
_______________________________ test_ticket48383 _______________________________

topology = <tickets.ticket48383_test.TopologyStandalone object at 0x7f6f4ac309d0>

    def test_ticket48383(topology):
        """
        This test case will check that we re-alloc buffer sizes on import.c

        We achieve this by setting the servers dbcachesize to a stupid small value
        and adding huge objects to ds.

        Then when we run db2index, either:
        data stress suites tickets tmp
        If we are not using the re-alloc code, it will FAIL (Bad)
        data stress suites tickets tmp
        If we re-alloc properly, it all works regardless.
        """

        topology.standalone.config.set('nsslapd-maxbersize', '200000000')
        topology.standalone.restart()

        # Create some stupid huge objects / attributes in DS.
        # seeAlso is indexed by default. Lets do that!

        # This will take a while ...
        data = [random.choice(string.letters) for x in xrange(10000000)]
        s = "".join(data)

        # This was here for an iteration test.
        i = 1
        USER_DN = 'uid=user%s,ou=people,%s' % (i, DEFAULT_SUFFIX)
        padding = ['%s' % n for n in range(400)]
        user = Entry((USER_DN, {
                      'objectclass': 'top posixAccount person extensibleObject'.split(),
                      'uid': 'user%s' % (i),
                      'cn': 'user%s' % (i),
                      'uidNumber': '%s' % (i),
                      'gidNumber': '%s' % (i),
                      'homeDirectory': '/home/user%s' % (i),
                      'description': 'user description',
                      'sn' : s ,
                      'padding' : padding ,
                      }))
        try:
            topology.standalone.add_s(user)
        except ldap.LDAPError as e:
            log.fatal('test 48383: Failed to user%s: error %s ' % (i, e.message['desc']))
            assert False

        # Set the dbsize really low.
        try:
            topology.standalone.modify_s(DEFAULT_BENAME, [(ldap.MOD_REPLACE, 'nsslapd-cachememsize', '1')])
        except ldap.LDAPError as e:
            log.fatal('Failed to change nsslapd-cachememsize ' + e.message['desc'])

        ## Does ds try and set a minimum possible value for this?
        ## Yes: [16/Feb/2016:16:39:18 +1000] - WARNING: cache too small, increasing to 500K bytes
        # Given the formula, by default, this means DS will make the buffsize 400k
        # So an object with a 1MB attribute should break indexing

        # stop the server
        topology.standalone.stop(timeout=30)
        # Now export and import the DB. It's easier than db2index ...
        topology.standalone.db2ldif(bename=DEFAULT_BENAME, suffixes=[DEFAULT_SUFFIX], excludeSuffixes=[], encrypt=False, \
            repl_data=True, outputfile='%s/ldif/%s.ldif' % (topology.standalone.dbdir,SERVERID_STANDALONE ))
        result = topology.standalone.ldif2db(DEFAULT_BENAME, None, None, False, '%s/ldif/%s.ldif' % (topology.standalone.dbdir,SERVERID_STANDALONE ))
>       assert(result)
E       assert False

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48383_test.py>:123: AssertionError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
----------------------------- Captured stdout call -----------------------------
OK group dirsrv exists
OK user dirsrv exists
Exported ldif file: /var/lib/dirsrv/slapd-standalone/db/ldif/standalone.ldif
OK group dirsrv exists
OK user dirsrv exists
----------------------------- Captured stderr call -----------------------------
CRITICAL:tickets.ticket48383_test:Failed to change nsslapd-cachememsize No such object
INFO:lib389:Running script: /usr/sbin/db2ldif -Z standalone -n userRoot -s dc=example,dc=com -a /var/lib/dirsrv/slapd-standalone/db/ldif/standalone.ldif -r
[27/Oct/2016:01:29:23.637729301 +0200] - DEBUG - ldbm_back_start - userRoot: entry cache size: 10485760 B; db size: 10321920 B
[27/Oct/2016:01:29:23.640767399 +0200] - DEBUG - ldbm_back_start - total cache size: 20971520 B;
[27/Oct/2016:01:29:23.642689006 +0200] - DEBUG - ldbm_back_start - Total entry cache size: 20971520 B; dbcache size: 10000000 B; available memory size: 2154676224 B;
[27/Oct/2016:01:29:23.654871919 +0200] - NOTICE - dblayer_start - Detected Disorderly Shutdown last time Directory Server was running, recovering database.
ldiffile: /var/lib/dirsrv/slapd-standalone/db/ldif/standalone.ldif
[27/Oct/2016:01:29:24.333710130 +0200] - ERR - ldbm_back_ldbm2ldif - db2ldif: can't open /var/lib/dirsrv/slapd-standalone/db/ldif/standalone.ldif: 2 (No such file or directory)
[27/Oct/2016:01:29:24.375901389 +0200] - INFO - dblayer_pre_close - Waiting for 4 database threads to stop
[27/Oct/2016:01:29:25.297550610 +0200] - INFO - dblayer_pre_close - All database threads now stopped
ERROR:lib389:ldif2db: Can't find file: /var/lib/dirsrv/slapd-standalone/db/ldif/standalone.ldif
___________________ test_ticket48497_homeDirectory_index_run ___________________

topology = <tickets.ticket48497_test.TopologyStandalone object at 0x7f6f4af9b090>

    def test_ticket48497_homeDirectory_index_run(topology):
        args = {TASK_WAIT: True}
        topology.standalone.tasks.reindex(suffix=SUFFIX, attrname='homeDirectory', args=args)

        log.info("Check indexing succeeded with a specified matching rule")
        file_path = os.path.join(topology.standalone.prefix, "var/log/dirsrv/slapd-%s/errors" % topology.standalone.serverid)
>       file_obj = open(file_path, "r")
E       IOError: [Errno 2] No such file or directory: '/usr/var/log/dirsrv/slapd-standalone/errors'

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48497_test.py>:139: IOError
----------------------------- Captured stderr call -----------------------------
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Index task index_homeDirectory_10272016_012950 completed successfully
INFO:tickets.ticket48497_test:Check indexing succeeded with a specified matching rule
__________________ test_ticket48745_homeDirectory_indexed_cis __________________

topology = <tickets.ticket48745_test.TopologyStandalone object at 0x7f6f4ac2b9d0>

    def test_ticket48745_homeDirectory_indexed_cis(topology):
        log.info("\n\nindex homeDirectory in caseIgnoreIA5Match and caseExactIA5Match")
        try:
            ent = topology.standalone.getEntry(HOMEDIRECTORY_INDEX, ldap.SCOPE_BASE)
        except ldap.NO_SUCH_OBJECT:
            topology.standalone.add_s(Entry((HOMEDIRECTORY_INDEX, {
                                             'objectclass': "top nsIndex".split(),
                                             'cn': HOMEDIRECTORY_CN,
                                             'nsSystemIndex': 'false',
                                             'nsIndexType': 'eq'})))
        #log.info("attach debugger")
        #time.sleep(60)

        IGNORE_MR_NAME='caseIgnoreIA5Match'
        EXACT_MR_NAME='caseExactIA5Match'
        mod =
'[(ldap.MOD_REPLACE,' MATCHINGRULE, '(IGNORE_MR_NAME,' 'EXACT_MR_N
AME))]' 'topology.standalone.modify_s(HOMEDIRECTORY_INDEX,' 'mod)' '#topology.standalone.stop(timeout=10)' 'log.info("successfully' checked that filter with exact mr , a filter with lowercase eq is 'failing")' '#assert' 'topology.standalone.db2index(bename=DEFAULT_BENAME,' suffixes=None, 'attrs=['\''homeDirectory'\''])' '#topology.standalone.start(timeout=10)' args = '{TASK_WAIT:' 'True}' 'topology.standalone.tasks.reindex(suffix=SUFFIX,' 'attrname='\''homeDirectory'\'',' 'args=args)' 'log.info("Check' indexing succeeded with a specified matching 'rule")' file_path = 'os.path.join(topology.standalone.prefix,' '"var/log/dirsrv/slapd-%s/errors"' % 'topology.standalone.serverid)' '>' file_obj = 'open(file_path,' '"r")' E IOError: '[Errno' '2]' No such file or directory: ''\''/usr/var/log/dirsrv/slapd-standalone/errors'\''' <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48745_test.py>:110: IOError ----------------------------- Captured stderr call ----------------------------- INFO:tickets.ticket48745_test: index homeDirectory in caseIgnoreIA5Match and caseExactIA5Match INFO:tickets.ticket48745_test:successfully checked that filter with exact mr , a filter with lowercase eq is failing INFO:lib389:List backend with suffix=dc=example,dc=com INFO:lib389:Index task index_homeDirectory_10272016_013109 completed successfully INFO:tickets.ticket48745_test:Check indexing succeeded with a specified matching rule __________________ test_ticket48746_homeDirectory_indexed_cis __________________ topology = '<tickets.ticket48746_test.TopologyStandalone' object at '0x7f6f53113350>' def 'test_ticket48746_homeDirectory_indexed_cis(topology):' 'log.info("\n\nindex' homeDirectory in caseIgnoreIA5Match and 'caseExactIA5Match")' try: ent = 'topology.standalone.getEntry(HOMEDIRECTORY_INDEX,' 'ldap.SCOPE_BASE)' except ldap.NO_SUCH_OBJECT: 'topology.standalone.add_s(Entry((HOMEDIRECTORY_INDEX,' '{' ''\''objectclass'\'':' 
"top nsIndex".split(), 'cn': HOMEDIRECTORY_CN,
                'nsSystemIndex': 'false', 'nsIndexType': 'eq'})))
        #log.info("attach debugger")
        #time.sleep(60)
    
        IGNORE_MR_NAME='caseIgnoreIA5Match'
        EXACT_MR_NAME='caseExactIA5Match'
        mod = [(ldap.MOD_REPLACE, MATCHINGRULE, (IGNORE_MR_NAME, EXACT_MR_NAME))]
        topology.standalone.modify_s(HOMEDIRECTORY_INDEX, mod)
    
        #topology.standalone.stop(timeout=10)
        log.info("successfully checked that filter with exact mr , a filter with lowercase eq is failing")
        #assert topology.standalone.db2index(bename=DEFAULT_BENAME, suffixes=None, attrs=['homeDirectory'])
        #topology.standalone.start(timeout=10)
        args = {TASK_WAIT: True}
        topology.standalone.tasks.reindex(suffix=SUFFIX, attrname='homeDirectory', args=args)
    
        log.info("Check indexing succeeded with a specified matching rule")
        file_path = os.path.join(topology.standalone.prefix, "var/log/dirsrv/slapd-%s/errors" % topology.standalone.serverid)
>       file_obj = open(file_path, "r")
E       IOError: [Errno 2] No such file or directory: '/usr/var/log/dirsrv/slapd-standalone/errors'

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48746_test.py>:108: IOError
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket48746_test: index homeDirectory in caseIgnoreIA5Match and caseExactIA5Match
INFO:tickets.ticket48746_test:successfully checked that filter with exact mr , a filter with lowercase eq is failing
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Index task index_homeDirectory_10272016_013134 completed successfully
INFO:tickets.ticket48746_test:Check indexing succeeded with a specified matching rule
__________________ test_ticket48746_homeDirectory_indexed_ces __________________

topology = <tickets.ticket48746_test.TopologyStandalone object at 0x7f6f53113350>

    def test_ticket48746_homeDirectory_indexed_ces(topology):
        log.info("\n\nindex homeDirectory in caseExactIA5Match, this would trigger the crash")
        try:
            ent = topology.standalone.getEntry(HOMEDIRECTORY_INDEX, ldap.SCOPE_BASE)
        except ldap.NO_SUCH_OBJECT:
            topology.standalone.add_s(Entry((HOMEDIRECTORY_INDEX, {
                'objectclass': "top nsIndex".split(),
                'cn': HOMEDIRECTORY_CN,
                'nsSystemIndex': 'false',
                'nsIndexType': 'eq'})))
        # log.info("attach debugger")
        # time.sleep(60)
    
        EXACT_MR_NAME='caseExactIA5Match'
        mod = [(ldap.MOD_REPLACE, MATCHINGRULE, (EXACT_MR_NAME))]
        topology.standalone.modify_s(HOMEDIRECTORY_INDEX, mod)
    
        #topology.standalone.stop(timeout=10)
        log.info("successfully checked that filter with exact mr , a filter with lowercase eq is failing")
        #assert topology.standalone.db2index(bename=DEFAULT_BENAME, suffixes=None, attrs=['homeDirectory'])
        #topology.standalone.start(timeout=10)
        args = {TASK_WAIT: True}
        topology.standalone.tasks.reindex(suffix=SUFFIX, attrname='homeDirectory', args=args)
    
        log.info("Check indexing succeeded with a specified matching rule")
        file_path = os.path.join(topology.standalone.prefix, "var/log/dirsrv/slapd-%s/errors" % topology.standalone.serverid)
>       file_obj = open(file_path, "r")
E       IOError: [Errno 2] No such file or directory: '/usr/var/log/dirsrv/slapd-standalone/errors'

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48746_test.py>:172: IOError
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket48746_test: index homeDirectory in caseExactIA5Match, this would trigger the crash
INFO:tickets.ticket48746_test:successfully checked that filter with exact mr , a filter with lowercase eq is failing
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Index task index_homeDirectory_10272016_013136 completed successfully
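Both ticket48746 failures come from the same path construction: the test joins `topology.standalone.prefix` with the relative path `var/log/dirsrv/...`, so on a packaged install (prefix `/usr`) it looks for the non-existent `/usr/var/log/dirsrv/slapd-standalone/errors` while the real log lives under `/var/log/dirsrv/`. A minimal sketch of a fallback lookup; `resolve_errors_log` is a hypothetical helper for illustration, not part of lib389:

```python
import os

def resolve_errors_log(prefix, serverid):
    # os.path.join("/usr", "var/log/...") yields "/usr/var/log/...",
    # which is the path in the IOError above.  Try it first, then fall
    # back to the standard FHS location.
    candidates = [
        os.path.join(prefix or "/", "var/log/dirsrv/slapd-%s/errors" % serverid),
        "/var/log/dirsrv/slapd-%s/errors" % serverid,
    ]
    for path in candidates:
        if os.path.isfile(path):
            return path
    # keep the first candidate so callers still report the expected location
    return candidates[0]
```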
INFO:tickets.ticket48746_test:Check indexing succeeded with a specified matching rule
_____________________ test_ticket48906_dblock_ldap_update ______________________

topology = <tickets.ticket48906_test.TopologyStandalone object at 0x7f6f5310a310>

    def test_ticket48906_dblock_ldap_update(topology):
        topology.standalone.log.info('###################################')
        topology.standalone.log.info('###')
        topology.standalone.log.info('### Check that after ldap update')
        topology.standalone.log.info('###  - monitor contains DEFAULT')
        topology.standalone.log.info('###  - configured contains DBLOCK_LDAP_UPDATE')
        topology.standalone.log.info('###  - After stop dse.ldif contains DBLOCK_LDAP_UPDATE')
        topology.standalone.log.info('###  - After stop guardian contains DEFAULT')
        topology.standalone.log.info('###    In fact guardian should differ from config to recreate the env')
        topology.standalone.log.info('### Check that after restart (DBenv recreated)')
        topology.standalone.log.info('###  - monitor contains DBLOCK_LDAP_UPDATE ')
        topology.standalone.log.info('###  - configured contains DBLOCK_LDAP_UPDATE')
        topology.standalone.log.info('###  - dse.ldif contains DBLOCK_LDAP_UPDATE')
        topology.standalone.log.info('###')
        topology.standalone.log.info('###################################')
    
        topology.standalone.modify_s(ldbm_config, [(ldap.MOD_REPLACE, DBLOCK_ATTR_CONFIG, DBLOCK_LDAP_UPDATE)])
        _check_monitored_value(topology, DBLOCK_DEFAULT)
        _check_configured_value(topology, attr=DBLOCK_ATTR_CONFIG, expected_value=DBLOCK_LDAP_UPDATE, required=True)
    
        topology.standalone.stop(timeout=10)
        _check_dse_ldif_value(topology, attr=DBLOCK_ATTR_CONFIG, expected_value=DBLOCK_LDAP_UPDATE)
>       _check_guardian_value(topology, attr=DBLOCK_ATTR_GUARDIAN, expected_value=DBLOCK_DEFAULT)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48906_test.py>:218: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

topology = <tickets.ticket48906_test.TopologyStandalone object at 0x7f6f5310a310>
attr = 'locks', expected_value = '10000'

    def _check_guardian_value(topology, attr=DBLOCK_ATTR_CONFIG, expected_value=None):
        guardian_file = topology.standalone.dbdir + '/db/guardian'
>       assert(os.path.exists(guardian_file))
E       assert <function exists at 0x7f6f64107050>('/var/lib/dirsrv/slapd-standalone/db/db/guardian')
E        +  where <function exists at 0x7f6f64107050> = <module 'posixpath' from '/usr/lib64/python2.7/posixpath.pyc'>.exists
E        +  where <module 'posixpath' from '/usr/lib64/python2.7/posixpath.pyc'> = os.path

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48906_test.py>:164: AssertionError
----------------------------- Captured stderr call -----------------------------
INFO:lib389:###################################
INFO:lib389:###
INFO:lib389:### Check that after ldap update
INFO:lib389:###  - monitor contains DEFAULT
INFO:lib389:###  - configured contains DBLOCK_LDAP_UPDATE
INFO:lib389:###  - After stop dse.ldif contains DBLOCK_LDAP_UPDATE
INFO:lib389:###  - After stop guardian contains DEFAULT
INFO:lib389:###    In fact guardian should differ from config to recreate the env
INFO:lib389:### Check that after restart (DBenv recreated)
INFO:lib389:###  - monitor contains DBLOCK_LDAP_UPDATE
INFO:lib389:###  - configured contains DBLOCK_LDAP_UPDATE
INFO:lib389:###  - dse.ldif contains DBLOCK_LDAP_UPDATE
INFO:lib389:###
INFO:lib389:###################################
_____________________ test_ticket48906_dblock_edit_update ______________________

topology = <tickets.ticket48906_test.TopologyStandalone object at 0x7f6f5310a310>

    def test_ticket48906_dblock_edit_update(topology):
        topology.standalone.log.info('###################################')
        topology.standalone.log.info('###')
        topology.standalone.log.info('### Check that after stop')
        topology.standalone.log.info('###  - dse.ldif contains DBLOCK_LDAP_UPDATE')
        topology.standalone.log.info('###  - guardian contains DBLOCK_LDAP_UPDATE')
        topology.standalone.log.info('### Check that edit dse+restart')
        topology.standalone.log.info('###  - monitor contains DBLOCK_EDIT_UPDATE')
        topology.standalone.log.info('###  - configured contains DBLOCK_EDIT_UPDATE')
        topology.standalone.log.info('### Check that after stop')
        topology.standalone.log.info('###  - dse.ldif contains DBLOCK_EDIT_UPDATE')
        topology.standalone.log.info('###  - guardian contains DBLOCK_EDIT_UPDATE')
        topology.standalone.log.info('###')
        topology.standalone.log.info('###################################')
    
        topology.standalone.stop(timeout=10)
        _check_dse_ldif_value(topology, attr=DBLOCK_ATTR_CONFIG, expected_value=DBLOCK_LDAP_UPDATE)
>       _check_guardian_value(topology, attr=DBLOCK_ATTR_GUARDIAN, expected_value=DBLOCK_LDAP_UPDATE)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48906_test.py>:243: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

topology = <tickets.ticket48906_test.TopologyStandalone object at 0x7f6f5310a310>
attr = 'locks', expected_value = '20000'

    def _check_guardian_value(topology, attr=DBLOCK_ATTR_CONFIG, expected_value=None):
        guardian_file = topology.standalone.dbdir + '/db/guardian'
>       assert(os.path.exists(guardian_file))
E       assert <function exists at 0x7f6f64107050>('/var/lib/dirsrv/slapd-standalone/db/db/guardian')
E        +  where <function exists at 0x7f6f64107050> = <module 'posixpath' from '/usr/lib64/python2.7/posixpath.pyc'>.exists
E        +  where <module 'posixpath' from '/usr/lib64/python2.7/posixpath.pyc'> = os.path
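Both `_check_guardian_value` failures assert on `/var/lib/dirsrv/slapd-standalone/db/db/guardian`, a doubled path: `dbdir` already ends in `/db`, and the helper appends `'/db/guardian'` on top of it. A sketch of a defensive join, assuming the guardian file lives directly under the instance's `db` directory; `guardian_path` is an illustrative name, not lib389 API:

```python
import os

def guardian_path(dbdir):
    # Only add the 'db' component when dbdir does not already end in it,
    # avoiding the .../db/db/guardian path seen in the assertion above.
    dbdir = dbdir.rstrip("/")
    if os.path.basename(dbdir) == "db":
        return os.path.join(dbdir, "guardian")
    return os.path.join(dbdir, "db", "guardian")
```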
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48906_test.py>:164: AssertionError
----------------------------- Captured stderr call -----------------------------
INFO:lib389:###################################
INFO:lib389:###
INFO:lib389:### Check that after stop
INFO:lib389:###  - dse.ldif contains DBLOCK_LDAP_UPDATE
INFO:lib389:###  - guardian contains DBLOCK_LDAP_UPDATE
INFO:lib389:### Check that edit dse+restart
INFO:lib389:###  - monitor contains DBLOCK_EDIT_UPDATE
INFO:lib389:###  - configured contains DBLOCK_EDIT_UPDATE
INFO:lib389:### Check that after stop
INFO:lib389:###  - dse.ldif contains DBLOCK_EDIT_UPDATE
INFO:lib389:###  - guardian contains DBLOCK_EDIT_UPDATE
INFO:lib389:###
INFO:lib389:###################################
________________________ test_ticket48906_dblock_robust ________________________

topology = <tickets.ticket48906_test.TopologyStandalone object at 0x7f6f5310a310>

    def test_ticket48906_dblock_robust(topology):
        topology.standalone.log.info('###################################')
        topology.standalone.log.info('###')
        topology.standalone.log.info('### Check that the following values are rejected')
        topology.standalone.log.info('###  - negative value')
        topology.standalone.log.info('###  - insuffisant value')
        topology.standalone.log.info('###  - invalid value')
        topology.standalone.log.info('### Check that minimum value is accepted')
        topology.standalone.log.info('###')
        topology.standalone.log.info('###################################')
    
        topology.standalone.start(timeout=10)
>       _check_monitored_value(topology, DBLOCK_EDIT_UPDATE)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48906_test.py>:291: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

topology = <tickets.ticket48906_test.TopologyStandalone object at 0x7f6f5310a310>
expected_value = '40000'

    def _check_monitored_value(topology, expected_value):
        entries = topology.standalone.search_s(ldbm_monitor, ldap.SCOPE_BASE, '(objectclass=*)')
>       assert(entries[0].hasValue(DBLOCK_ATTR_MONITOR) and entries[0].getValue(DBLOCK_ATTR_MONITOR) == expected_value)
E       assert (True and '20000' == '40000'
E        +  where True = <bound method Entry.hasValue of dn: cn=database,cn=monitor,cn=ldbm database,cn...pd-db-txn-region-wait-rate: 0\nobjectClass: top\nobjectClass: extensibleObject\n\n>('nsslapd-db-configured-locks')
E        +  where <bound method Entry.hasValue of dn: cn=database,cn=monitor,cn=ldbm database,cn...pd-db-txn-region-wait-rate: 0\nobjectClass: top\nobjectClass: extensibleObject\n\n> = dn: cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config\ncn: database\n...apd-db-txn-region-wait-rate: 0\nobjectClass: top\nobjectClass: extensibleObject\n\n.hasValue
E         - 20000
E         ? ^
E         + 40000
E         ? ^)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48906_test.py>:144: AssertionError
----------------------------- Captured stderr call -----------------------------
INFO:lib389:###################################
INFO:lib389:###
INFO:lib389:### Check that the following values are rejected
INFO:lib389:###  - negative value
INFO:lib389:###  - insuffisant value
INFO:lib389:###  - invalid value
INFO:lib389:### Check that minimum value is accepted
INFO:lib389:###
INFO:lib389:###################################
INFO:lib389:open(): Connecting to uri ldap://localhost.localdomain:38931/
INFO:lib389:open(): bound as cn=Directory Manager
____________________________ test_range_search_init ____________________________

topology = <suites.memory_leaks.range_search_test.TopologyStandalone object at 0x7f6f4ac3b990>

    def test_range_search_init(topology):
        '''
        Enable retro cl, and valgrind.
        Since valgrind tests move the ns-slapd binary around it's
        important to always "valgrind_disable" before "assert False"ing,
        otherwise we leave the wrong ns-slapd in place if there is a failure
        '''
        log.info('Initializing test_range_search...')
    
        topology.standalone.plugins.enable(name=PLUGIN_RETRO_CHANGELOG)
    
        # First stop the instance
        topology.standalone.stop(timeout=30)
    
        # Get the sbin directory so we know where to replace 'ns-slapd'
        sbin_dir = get_sbin_dir(prefix=topology.standalone.prefix)
    
        # Enable valgrind
        if not topology.standalone.has_asan():
>           valgrind_enable(sbin_dir)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/memory_leaks/range_search_test.py>:86: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

sbin_dir = '/usr/sbin'
wrapper = '<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/ns-slapd.valgrind'>

    def valgrind_enable(sbin_dir, wrapper=None):
        '''
        Copy the valgrind ns-slapd wrapper into the /sbin directory
        (making a backup of the original ns-slapd binary).
    
        The script calling valgrind_enable() must be run as the 'root' user
        as selinux needs to be disabled for valgrind to work
    
        The server instance(s) should be stopped prior to calling this function.
        Then after calling valgrind_enable():
        - Start the server instance(s) with a timeout of 60 (valgrind takes a while to startup)
        - Run the tests
        - Stop the server
        - Get the results file
        - Run valgrind_check_file(result_file, "pattern", "pattern", ...)
        - Run valgrind_disable()
    
        :param sbin_dir: the location of the ns-slapd binary (e.g. /usr/sbin)
        :param wrapper: The valgrind wrapper script for ns-slapd (if not set, a default wrapper is used)
        :raise IOError: If there is a problem setting up the valgrind scripts
        :raise EnvironmentError: If script is not run as 'root'
        '''
    
        if os.geteuid() != 0:
            log.error('This script must be run as root to use valgrind')
            raise EnvironmentError
    
        if not wrapper:
            # use the default ns-slapd wrapper
            wrapper = '%s/%s' % (os.path.dirname(os.path.abspath(__file__)), VALGRIND_WRAPPER)
    
        nsslapd_orig = '%s/ns-slapd' % sbin_dir
        nsslapd_backup = '%s/ns-slapd.original' % sbin_dir
    
        if os.path.isfile(nsslapd_backup):
            # There is a backup which means we never cleaned up from a previous
            # run(failed test?)
            if not filecmp.cmp(nsslapd_backup, nsslapd_orig):
                # Files are different sizes, we assume valgrind is already setup
                log.info('Valgrind is already enabled.')
                return
    
        # Check both nsslapd's exist
        if not os.path.isfile(wrapper):
            raise IOError('The valgrind wrapper (%s) does not exist. file=%s' % (wrapper, __file__))
        if not os.path.isfile(nsslapd_orig):
            raise IOError('The binary (%s) does not exist or is not accessible.' % nsslapd_orig)
    
        # Make a backup of the original ns-slapd and copy the wrapper into place
        try:
            shutil.copy2(nsslapd_orig, nsslapd_backup)
        except IOError as e:
            log.fatal('valgrind_enable(): failed to backup ns-slapd, error: %s' % e.strerror)
            raise IOError('failed to backup ns-slapd, error: %s' % e.strerror)
    
        # Copy the valgrind wrapper into place
        try:
            shutil.copy2(wrapper, nsslapd_orig)
        except IOError as e:
            log.fatal('valgrind_enable(): failed to copy valgrind wrapper \
                      to ns-slapd, error: %s' % e.strerror)
            raise IOError('failed to copy valgrind wrapper to ns-slapd, error: %s' %
>                         e.strerror)
E           IOError: failed to copy valgrind wrapper to ns-slapd, error: Text file busy

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/utils.py>:255: IOError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
----------------------------- Captured stderr call -----------------------------
INFO:suites.memory_leaks.range_search_test:Initializing test_range_search...
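The "Text file busy" (ETXTBSY) error arises because `shutil.copy2` opens `/usr/sbin/ns-slapd` for writing while a process is still executing it. A common workaround (a sketch, not the lib389 implementation) is to copy to a temporary file in the same directory and `os.rename` over the target, which replaces the directory entry instead of writing into the busy file:

```python
import os
import shutil
import tempfile

def replace_binary(src, dst):
    # Writing into a running executable fails with ETXTBSY; rename()
    # swaps the directory entry instead, so it succeeds even while the
    # old binary is still executing.  Atomic within one filesystem.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dst) or ".")
    os.close(fd)
    shutil.copy2(src, tmp)
    os.rename(tmp, dst)
```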
CRITICAL:lib389.utils:valgrind_enable(): failed to copy valgrind wrapper to ns-slapd, error: Text file busy
___________________________ test_multi_suffix_search ___________________________

topology = <suites.paged_results.paged_results_test.TopologyStandalone object at 0x7f6f4af61450>
test_user = None, new_suffixes = None

    def test_multi_suffix_search(topology, test_user, new_suffixes):
        """Verify that page result search returns empty cookie
        if there is no returned entry.
    
        :Feature: Simple paged results
    
        :Setup: Standalone instance, test user for binding,
                two suffixes with backends, one is inserted into another,
                10 users for the search base within each suffix
    
        :Steps: 1. Bind as test user
                2. Search through all 20 added users with a simple paged control
                   using page_size = 4
                3. Wait some time logs to be updated
                3. Check access log
    
        :Assert: All users should be found, the access log should contain
                 the pr_cookie for each page request and it should be equal 0,
                 except the last one should be equal -1
        """
        search_flt = r'(uid=test*)'
        searchreq_attrlist = ['dn', 'sn']
        page_size = 4
        users_num = 20
    
        log.info('Clear the access log')
        topology.standalone.deleteAccessLogs()
    
        users_list_1 = add_users(topology, users_num / 2, NEW_SUFFIX_1)
        users_list_2 = add_users(topology, users_num / 2, NEW_SUFFIX_2)
    
        try:
            log.info('Set DM bind')
            topology.standalone.simple_bind_s(DN_DM, PASSWORD)
    
            req_ctrl = SimplePagedResultsControl(True, size=page_size, cookie='')
    
            all_results = paged_search(topology, NEW_SUFFIX_1, [req_ctrl], search_flt, searchreq_attrlist)
    
            log.info('{} results'.format(len(all_results)))
            assert len(all_results) == users_num
    
            log.info('Restart the server to flush the logs')
            topology.standalone.restart(timeout=10)
    
            access_log_lines = topology.standalone.ds_access_log.match('.*pr_cookie=.*')
            pr_cookie_list = ([line.rsplit('=', 1)[-1] for line in access_log_lines])
            pr_cookie_list = [int(pr_cookie) for pr_cookie in pr_cookie_list]
            log.info('Assert that last pr_cookie == -1 and others pr_cookie == 0')
            pr_cookie_zeros = list(pr_cookie == 0 for pr_cookie in pr_cookie_list[0:-1])
            assert all(pr_cookie_zeros)
>           assert pr_cookie_list[-1] == -1
E           IndexError: list index out of range

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/paged_results/paged_results_test.py>:1198: IndexError
---------------------------- Captured stderr setup -----------------------------
INFO:suites.paged_results.paged_results_test:Adding suffix:o=test_parent and backend: parent_base
INFO:lib389:List backend with suffix=o=test_parent
INFO:lib389:Creating a local backend
INFO:lib389:List backend cn=parent_base,cn=ldbm database,cn=plugins,cn=config
INFO:lib389:Found entry dn: cn=parent_base,cn=ldbm database,cn=plugins,cn=config
cn: parent_base
nsslapd-cachememsize: 10485760
nsslapd-cachesize: -1
nsslapd-directory: /var/lib/dirsrv/slapd-standalone/db/parent_base
nsslapd-dncachememsize: 10485760
nsslapd-readonly: off
nsslapd-require-index: off
nsslapd-suffix: o=test_parent
objectClass: top
objectClass: extensibleObject
objectClass: nsBackendInstance

INFO:lib389:Entry dn: cn="o=test_parent",cn=mapping tree,cn=config
cn: o=test_parent
nsslapd-backend: parent_base
nsslapd-state: backend
objectclass: top
objectclass: extensibleObject
objectclass: nsMappingTree

INFO:lib389:Found entry dn: cn=o\3Dtest_parent,cn=mapping tree,cn=config
cn: o=test_parent
nsslapd-backend: parent_base
nsslapd-state: backend
objectClass: top
objectClass: extensibleObject
objectClass: nsMappingTree

INFO:suites.paged_results.paged_results_test:Adding suffix:ou=child,o=test_parent and backend: child_base
INFO:lib389:List backend with suffix=ou=child,o=test_parent
INFO:lib389:Creating a local backend
INFO:lib389:List backend cn=child_base,cn=ldbm database,cn=plugins,cn=config
INFO:lib389:Found entry dn: cn=child_base,cn=ldbm database,cn=plugins,cn=config
cn: child_base
nsslapd-cachememsize: 10485760
nsslapd-cachesize: -1
nsslapd-directory: /var/lib/dirsrv/slapd-standalone/db/child_base
nsslapd-dncachememsize: 10485760
nsslapd-readonly: off
nsslapd-require-index: off
nsslapd-suffix: ou=child,o=test_parent
objectClass: top
objectClass: extensibleObject
objectClass: nsBackendInstance

INFO:lib389:Entry dn: cn="ou=child,o=test_parent",cn=mapping tree,cn=config
cn: ou=child,o=test_parent
nsslapd-backend: child_base
nsslapd-parent-suffix: o=test_parent
nsslapd-state: backend
objectclass: top
objectclass: extensibleObject
objectclass: nsMappingTree

INFO:lib389:Found entry dn: cn=ou\3Dchild\2Co\3Dtest_parent,cn=mapping tree,cn=config
cn: ou=child,o=test_parent
nsslapd-backend: child_base
nsslapd-parent-suffix: o=test_parent
nsslapd-state: backend
objectClass: top
objectClass: extensibleObject
objectClass: nsMappingTree

INFO:suites.paged_results.paged_results_test:Adding ACI to allow our test user to search
----------------------------- Captured stderr call -----------------------------
INFO:suites.paged_results.paged_results_test:Clear the access log
INFO:suites.paged_results.paged_results_test:Adding 10 users
INFO:suites.paged_results.paged_results_test:Adding 10 users
INFO:suites.paged_results.paged_results_test:Set DM bind
INFO:suites.paged_results.paged_results_test:Running simple paged result search with - search suffix: o=test_parent; filter: (uid=test*); attr list ['dn', 'sn']; page_size = 4; controls: [<ldap.controls.libldap.SimplePagedResultsControl instance at 0x7f6f4af4aef0>].
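The IndexError in test_multi_suffix_search happens because `ds_access_log.match('.*pr_cookie=.*')` returned an empty list (the log was evidently not flushed despite the restart), so `pr_cookie_list[-1]` has nothing to index. A sketch of the same parsing step with the empty case surfaced explicitly; `check_pr_cookies` is an illustrative name, not the test's actual helper:

```python
def check_pr_cookies(access_log_lines):
    # Same parsing as the test: take the value after the last '=' on each
    # matched line.  Fail with a clear message when no lines matched,
    # instead of an IndexError from indexing an empty list.
    cookies = [int(line.rsplit("=", 1)[-1]) for line in access_log_lines]
    assert cookies, "no pr_cookie entries found; was the access log flushed?"
    assert all(c == 0 for c in cookies[:-1])
    assert cookies[-1] == -1
    return cookies
```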
INFO:suites.paged_results.paged_results_test:Getting page 0
INFO:suites.paged_results.paged_results_test:Getting page 1
INFO:suites.paged_results.paged_results_test:Getting page 2
INFO:suites.paged_results.paged_results_test:Getting page 3
INFO:suites.paged_results.paged_results_test:Getting page 4
INFO:suites.paged_results.paged_results_test:Getting page 5
INFO:suites.paged_results.paged_results_test:20 results
INFO:suites.paged_results.paged_results_test:Restart the server to flush the logs
INFO:suites.paged_results.paged_results_test:Assert that last pr_cookie == -1 and others pr_cookie == 0
INFO:suites.paged_results.paged_results_test:Remove added users
INFO:suites.paged_results.paged_results_test:Deleting 10 users
INFO:suites.paged_results.paged_results_test:Deleting 10 users
________________________ test_cleanallruv_stress_clean _________________________

topology = <suites.replication.cleanallruv_test.TopologyReplication object at 0x7f6f4bf19190>

    def test_cleanallruv_stress_clean(topology):
        '''
        Put each server(m1 - m4) under stress, and perform the entire clean process
        '''
        log.info('Running test_cleanallruv_stress_clean...')
        log.info('test_cleanallruv_stress_clean: put all the masters under load...')
    
        # Put all the masters under load
        m1_add_users = AddUsers(topology.master1, 2000)
        m1_add_users.start()
        m2_add_users = AddUsers(topology.master2, 2000)
        m2_add_users.start()
        m3_add_users = AddUsers(topology.master3, 2000)
        m3_add_users.start()
        m4_add_users = AddUsers(topology.master4, 2000)
        m4_add_users.start()
    
        # Allow sometime to get replication flowing in all directions
        log.info('test_cleanallruv_stress_clean: allow some time for replication to get flowing...')
        time.sleep(5)
    
        # Put master 4 into read only mode
        log.info('test_cleanallruv_stress_clean: put master 4 into read-only mode...')
        try:
            topology.master4.modify_s(DN_CONFIG, [(ldap.MOD_REPLACE, 'nsslapd-readonly', 'on')])
        except ldap.LDAPError as e:
            log.fatal('test_cleanallruv_stress_clean: Failed to put master 4 into read-only mode: error ' + e.message['desc'])
            assert False
    
        # We need to wait for master 4 to push its changes out
        log.info('test_cleanallruv_stress_clean: allow some time for master 4 to push changes out (60 seconds)...')
        time.sleep(60)
    
        # Disable master 4
        log.info('test_cleanallruv_stress_clean: disable replication on master 4...')
        try:
            topology.master4.replica.disableReplication(DEFAULT_SUFFIX)
        except:
            log.fatal('test_cleanallruv_stress_clean: failed to diable replication')
            assert False
    
        # Remove the agreements from the other masters that point to master 4
        remove_master4_agmts("test_cleanallruv_stress_clean", topology)
    
        # Run the task
        log.info('test_cleanallruv_stress_clean: Run the cleanAllRUV task...')
        try:
            topology.master1.tasks.cleanAllRUV(suffix=DEFAULT_SUFFIX, replicaid='4', args={TASK_WAIT: True})
        except ValueError as e:
            log.fatal('test_cleanallruv_stress_clean: Problem running cleanAllRuv task: ' + e.message('desc'))
            assert False
    
        # Wait for the update to finish
        log.info('test_cleanallruv_stress_clean: wait for all the updates to finish...')
        m1_add_users.join()
        m2_add_users.join()
        m3_add_users.join()
        m4_add_users.join()
    
        # Check the other master's RUV for 'replica 4'
        log.info('test_cleanallruv_stress_clean: check if all the replicas have been cleaned...')
        clean = check_ruvs("test_cleanallruv_stress_clean", topology)
        if not clean:
            log.fatal('test_cleanallruv_stress_clean: Failed to clean replicas')
            assert False
    
        log.info('test_cleanallruv_stress_clean: PASSED, restoring master 4...')
    
        #
        # Cleanup - restore master 4
        #
        # Sleep for a bit to replication complete
        log.info("Sleep for 120 seconds to allow replication to complete...")
        time.sleep(120)
    
        # Turn off readonly mode
        try:
            topology.master4.modify_s(DN_CONFIG, [(ldap.MOD_REPLACE, 'nsslapd-readonly', 'off')])
        except ldap.LDAPError as e:
            log.fatal('test_cleanallruv_stress_clean: Failed to put master 4 into read-only mode: error ' + e.message['desc'])
            assert False
    
>       restore_master4(topology)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/replication/cleanallruv_test.py>:1208: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/replication/cleanallruv_test.py>:571: in restore_master4
    topology.master2.start(timeout=30)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:1096: in start
    "dirsrv@%s" % self.serverid])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

popenargs = (['/usr/bin/systemctl', 'start', 'dirsrv@master_2'],), kwargs = {}
retcode = 1, cmd = ['/usr/bin/systemctl', 'start', 'dirsrv@master_2']

    def check_call(*popenargs, **kwargs):
        """Run command with arguments.  Wait for command to complete.  If
        the exit code was zero then return, otherwise raise
        CalledProcessError.  The CalledProcessError object will have the
        return code in the returncode attribute.
    
        The arguments are the same as for the Popen constructor.  Example:
    
        check_call(["ls", "-l"])
        """
        retcode = call(*popenargs, **kwargs)
        if retcode:
            cmd = kwargs.get("args")
            if cmd is None:
                cmd = popenargs[0]
>           raise CalledProcessError(retcode, cmd)
E           CalledProcessError: Command '['/usr/bin/systemctl', 'start', 'dirsrv@master_2']' returned non-zero exit status 1

/usr/lib64/python2.7/subprocess.py:541: CalledProcessError
----------------------------- Captured stderr call -----------------------------
INFO:suites.replication.cleanallruv_test:Running test_cleanallruv_stress_clean...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: put all the masters under load...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: allow some time for replication to get flowing...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: put master 4 into read-only mode...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: allow some time for master 4 to push changes out (60 seconds)...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: disable replication on master 4...
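The CalledProcessError here is just the stdlib `subprocess.check_call` semantics: it returns on exit status 0 and raises otherwise, carrying the status in `e.returncode` and the argv in `e.cmd` (systemctl exited 1 because the dirsrv@master_2 control process dumped core, per the journal message further down). A minimal sketch of those semantics; `start_unit` is an illustrative wrapper name, not lib389 API:

```python
import subprocess
import sys

def start_unit(cmd):
    # check_call() raises CalledProcessError on any non-zero exit status.
    try:
        subprocess.check_call(cmd)
        return 0
    except subprocess.CalledProcessError as e:
        # surface the failing status instead of an unhandled traceback
        return e.returncode
```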
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: remove all the agreements to master 4...
INFO:lib389:Agreement (cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config) was successfully removed
INFO:lib389:Agreement (cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config) was successfully removed
INFO:lib389:Agreement (cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config) was successfully removed
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: Run the cleanAllRUV task...
INFO:lib389:cleanAllRUV task (task-10272016_023156) completed successfully
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: wait for all the updates to finish...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: check if all the replicas have been cleaned...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: Master 1 is cleaned.
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: Master 2 is cleaned.
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: Master 3 is cleaned.
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: PASSED, restoring master 4...
INFO:suites.replication.cleanallruv_test:Sleep for 120 seconds to allow replication to complete...
INFO:suites.replication.cleanallruv_test:Restoring master 4...
INFO:lib389:List backend with suffix=dc=example,dc=com
WARNING:lib389:entry cn=changelog5,cn=config already exists
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38941,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38942,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38943,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
Job for dirsrv@master_2.service failed because a fatal signal was delivered causing the control process to dump core. See "systemctl status dirsrv@master_2.service" and "journalctl -xe" for details.
============== 35 failed, 481 passed, 5 error in 8092.80 seconds ===============
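For anyone scripting against these nightly reports, the counts can be pulled from the standard pytest summary line above with a small regex; `parse_summary` and the pattern are illustrative, not part of the CI tooling:

```python
import re

def parse_summary(line):
    # Match "<n> failed", "<n> passed", "<n> error" in the summary line;
    # the elapsed-seconds figure is deliberately not matched.
    return dict((kind, int(num)) for num, kind in
                re.findall(r"(\d+) (failed|passed|error)", line))
```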
============================= test session starts ============================== platform linux2 -- Python 2.7.12, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- /usr/bin/python2 cachedir: .cache rootdir: <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests,> inifile: plugins: sourceorder-0.5, multihost-1.0 collecting ... collected 520 items tickets/ticket1347760_test.py::test_ticket1347760 FAILED tickets/ticket365_test.py::test_ticket365 PASSED tickets/ticket47313_test.py::test_ticket47313_run PASSED tickets/ticket47384_test.py::test_ticket47384 PASSED tickets/ticket47431_test.py::test_ticket47431_0 PASSED tickets/ticket47431_test.py::test_ticket47431_1 FAILED tickets/ticket47431_test.py::test_ticket47431_2 PASSED tickets/ticket47431_test.py::test_ticket47431_3 PASSED tickets/ticket47462_test.py::test_ticket47462 FAILED tickets/ticket47490_test.py::test_ticket47490_init PASSED tickets/ticket47490_test.py::test_ticket47490_one PASSED tickets/ticket47490_test.py::test_ticket47490_two PASSED tickets/ticket47490_test.py::test_ticket47490_three PASSED tickets/ticket47490_test.py::test_ticket47490_four PASSED tickets/ticket47490_test.py::test_ticket47490_five PASSED tickets/ticket47490_test.py::test_ticket47490_six PASSED tickets/ticket47490_test.py::test_ticket47490_seven PASSED tickets/ticket47490_test.py::test_ticket47490_eight PASSED tickets/ticket47490_test.py::test_ticket47490_nine PASSED tickets/ticket47536_test.py::test_ticket47536 FAILED tickets/ticket47553_test.py::test_ticket47553 PASSED tickets/ticket47560_test.py::test_ticket47560 PASSED tickets/ticket47573_test.py::test_ticket47573_init PASSED tickets/ticket47573_test.py::test_ticket47573_one PASSED tickets/ticket47573_test.py::test_ticket47573_two PASSED tickets/ticket47573_test.py::test_ticket47573_three PASSED tickets/ticket47619_test.py::test_ticket47619_init FAILED tickets/ticket47619_test.py::test_ticket47619_create_index PASSED 
tickets/ticket47619_test.py::test_ticket47619_reindex PASSED
tickets/ticket47619_test.py::test_ticket47619_check_indexed_search PASSED
tickets/ticket47640_test.py::test_ticket47640 PASSED
tickets/ticket47653MMR_test.py::test_ticket47653_init PASSED
tickets/ticket47653MMR_test.py::test_ticket47653_add FAILED
tickets/ticket47653MMR_test.py::test_ticket47653_modify FAILED
tickets/ticket47653_test.py::test_ticket47653_init PASSED
tickets/ticket47653_test.py::test_ticket47653_add PASSED
tickets/ticket47653_test.py::test_ticket47653_search PASSED
tickets/ticket47653_test.py::test_ticket47653_modify PASSED
tickets/ticket47653_test.py::test_ticket47653_delete PASSED
tickets/ticket47669_test.py::test_ticket47669_init FAILED
tickets/ticket47669_test.py::test_ticket47669_changelog_maxage FAILED
tickets/ticket47669_test.py::test_ticket47669_changelog_triminterval FAILED
tickets/ticket47669_test.py::test_ticket47669_changelog_compactdbinterval FAILED
tickets/ticket47669_test.py::test_ticket47669_retrochangelog_maxage FAILED
tickets/ticket47676_test.py::test_ticket47676_init PASSED
tickets/ticket47676_test.py::test_ticket47676_skip_oc_at PASSED
tickets/ticket47676_test.py::test_ticket47676_reject_action PASSED
tickets/ticket47714_test.py::test_ticket47714_init PASSED
tickets/ticket47714_test.py::test_ticket47714_run_0 PASSED
tickets/ticket47714_test.py::test_ticket47714_run_1 PASSED
tickets/ticket47721_test.py::test_ticket47721_init PASSED
tickets/ticket47721_test.py::test_ticket47721_0 PASSED
tickets/ticket47721_test.py::test_ticket47721_1 PASSED
tickets/ticket47721_test.py::test_ticket47721_2 PASSED
tickets/ticket47721_test.py::test_ticket47721_3 PASSED
tickets/ticket47721_test.py::test_ticket47721_4 PASSED
tickets/ticket47781_test.py::test_ticket47781 PASSED
tickets/ticket47787_test.py::test_ticket47787_init PASSED
tickets/ticket47787_test.py::test_ticket47787_2 PASSED
tickets/ticket47808_test.py::test_ticket47808_run PASSED
tickets/ticket47815_test.py::test_ticket47815 PASSED
tickets/ticket47819_test.py::test_ticket47819 PASSED
tickets/ticket47823_test.py::test_ticket47823_init FAILED
tickets/ticket47823_test.py::test_ticket47823_one_container_add PASSED
tickets/ticket47823_test.py::test_ticket47823_one_container_mod PASSED
tickets/ticket47823_test.py::test_ticket47823_one_container_modrdn PASSED
tickets/ticket47823_test.py::test_ticket47823_multi_containers_add PASSED
tickets/ticket47823_test.py::test_ticket47823_multi_containers_mod PASSED
tickets/ticket47823_test.py::test_ticket47823_multi_containers_modrdn PASSED
tickets/ticket47823_test.py::test_ticket47823_across_multi_containers_add PASSED
tickets/ticket47823_test.py::test_ticket47823_across_multi_containers_mod PASSED
tickets/ticket47823_test.py::test_ticket47823_across_multi_containers_modrdn PASSED
tickets/ticket47823_test.py::test_ticket47823_invalid_config_1 FAILED
tickets/ticket47823_test.py::test_ticket47823_invalid_config_2 FAILED
tickets/ticket47823_test.py::test_ticket47823_invalid_config_3 FAILED
tickets/ticket47823_test.py::test_ticket47823_invalid_config_4 FAILED
tickets/ticket47823_test.py::test_ticket47823_invalid_config_5 FAILED
tickets/ticket47823_test.py::test_ticket47823_invalid_config_6 FAILED
tickets/ticket47823_test.py::test_ticket47823_invalid_config_7 FAILED
tickets/ticket47828_test.py::test_ticket47828_init PASSED
tickets/ticket47828_test.py::test_ticket47828_run_0 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_1 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_2 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_3 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_4 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_5 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_6 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_7 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_8 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_9 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_10 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_11 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_12 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_13 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_14 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_15 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_16 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_17 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_18 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_19 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_20 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_21 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_22 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_23 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_24 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_25 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_26 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_27 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_28 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_29 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_30 PASSED
tickets/ticket47828_test.py::test_ticket47828_run_31 PASSED
tickets/ticket47829_test.py::test_ticket47829_init PASSED
tickets/ticket47829_test.py::test_ticket47829_mod_active_user_1 PASSED
tickets/ticket47829_test.py::test_ticket47829_mod_active_user_2 PASSED
tickets/ticket47829_test.py::test_ticket47829_mod_active_user_3 PASSED
tickets/ticket47829_test.py::test_ticket47829_mod_stage_user_1 PASSED
tickets/ticket47829_test.py::test_ticket47829_mod_stage_user_2 PASSED
tickets/ticket47829_test.py::test_ticket47829_mod_stage_user_3 PASSED
tickets/ticket47829_test.py::test_ticket47829_mod_out_user_1 PASSED
tickets/ticket47829_test.py::test_ticket47829_mod_out_user_2 PASSED
tickets/ticket47829_test.py::test_ticket47829_mod_out_user_3 PASSED
tickets/ticket47829_test.py::test_ticket47829_mod_active_user_modrdn_active_user_1 PASSED
tickets/ticket47829_test.py::test_ticket47829_mod_active_user_modrdn_stage_user_1 PASSED
tickets/ticket47829_test.py::test_ticket47829_mod_active_user_modrdn_out_user_1 PASSED
tickets/ticket47829_test.py::test_ticket47829_mod_modrdn_1 PASSED
tickets/ticket47829_test.py::test_ticket47829_mod_stage_user_modrdn_active_user_1 PASSED
tickets/ticket47829_test.py::test_ticket47829_mod_stage_user_modrdn_stage_user_1 PASSED
tickets/ticket47829_test.py::test_ticket47829_indirect_active_group_1 PASSED
tickets/ticket47829_test.py::test_ticket47829_indirect_active_group_2 PASSED
tickets/ticket47829_test.py::test_ticket47829_indirect_active_group_3 PASSED
tickets/ticket47829_test.py::test_ticket47829_indirect_active_group_4 PASSED
tickets/ticket47833_test.py::test_ticket47829_init PASSED
tickets/ticket47833_test.py::test_ticket47829_mod_stage_user_modrdn_stage_user_1 PASSED
tickets/ticket47869MMR_test.py::test_ticket47869_init PASSED
tickets/ticket47869MMR_test.py::test_ticket47869_check PASSED
tickets/ticket47871_test.py::test_ticket47871_init FAILED
tickets/ticket47871_test.py::test_ticket47871_1 PASSED
tickets/ticket47871_test.py::test_ticket47871_2 PASSED
tickets/ticket47900_test.py::test_ticket47900 PASSED
tickets/ticket47910_test.py::test_ticket47910_logconv_start_end_positive PASSED
tickets/ticket47910_test.py::test_ticket47910_logconv_start_end_negative PASSED
tickets/ticket47910_test.py::test_ticket47910_logconv_start_end_invalid PASSED
tickets/ticket47910_test.py::test_ticket47910_logconv_noaccesslogs PASSED
tickets/ticket47920_test.py::test_ticket47920_init PASSED
tickets/ticket47920_test.py::test_ticket47920_mod_readentry_ctrl PASSED
tickets/ticket47921_test.py::test_ticket47921 PASSED
tickets/ticket47927_test.py::test_ticket47927_init PASSED
tickets/ticket47927_test.py::test_ticket47927_one PASSED
tickets/ticket47927_test.py::test_ticket47927_two PASSED
tickets/ticket47927_test.py::test_ticket47927_three PASSED
tickets/ticket47927_test.py::test_ticket47927_four PASSED
tickets/ticket47927_test.py::test_ticket47927_five PASSED
tickets/ticket47927_test.py::test_ticket47927_six PASSED
tickets/ticket47931_test.py::test_ticket47931 PASSED
tickets/ticket47937_test.py::test_ticket47937 PASSED
tickets/ticket47950_test.py::test_ticket47950 PASSED
tickets/ticket47953_test.py::test_ticket47953 PASSED
tickets/ticket47963_test.py::test_ticket47963 PASSED
tickets/ticket47966_test.py::test_ticket47966 PASSED
tickets/ticket47970_test.py::test_ticket47970 PASSED
tickets/ticket47973_test.py::test_ticket47973 PASSED
tickets/ticket47976_test.py::test_ticket47976_init PASSED
tickets/ticket47976_test.py::test_ticket47976_1 PASSED
tickets/ticket47976_test.py::test_ticket47976_2 PASSED
tickets/ticket47976_test.py::test_ticket47976_3 PASSED
tickets/ticket47980_test.py::test_ticket47980 PASSED
tickets/ticket47981_test.py::test_ticket47981 PASSED
tickets/ticket47988_test.py::test_ticket47988_init PASSED
tickets/ticket47988_test.py::test_ticket47988_1 PASSED
tickets/ticket47988_test.py::test_ticket47988_2 PASSED
tickets/ticket47988_test.py::test_ticket47988_3 PASSED
tickets/ticket47988_test.py::test_ticket47988_4 PASSED
tickets/ticket47988_test.py::test_ticket47988_5 PASSED
tickets/ticket47988_test.py::test_ticket47988_6 PASSED
tickets/ticket48005_test.py::test_ticket48005_setup PASSED
tickets/ticket48005_test.py::test_ticket48005_memberof PASSED
tickets/ticket48005_test.py::test_ticket48005_automember PASSED
tickets/ticket48005_test.py::test_ticket48005_syntaxvalidate PASSED
tickets/ticket48005_test.py::test_ticket48005_usn PASSED
tickets/ticket48005_test.py::test_ticket48005_schemareload PASSED
tickets/ticket48013_test.py::test_ticket48013 PASSED
tickets/ticket48026_test.py::test_ticket48026 PASSED
tickets/ticket48109_test.py::test_ticket48109 FAILED
tickets/ticket48170_test.py::test_ticket48170 PASSED
tickets/ticket48194_test.py::test_init PASSED
tickets/ticket48194_test.py::test_run_0 PASSED
tickets/ticket48194_test.py::test_run_1 PASSED
tickets/ticket48194_test.py::test_run_2 PASSED
tickets/ticket48194_test.py::test_run_3 PASSED
tickets/ticket48194_test.py::test_run_4 PASSED
tickets/ticket48194_test.py::test_run_5 PASSED
tickets/ticket48194_test.py::test_run_6 PASSED
tickets/ticket48194_test.py::test_run_7 PASSED
tickets/ticket48194_test.py::test_run_8 PASSED
tickets/ticket48194_test.py::test_run_9 PASSED
tickets/ticket48194_test.py::test_run_10 PASSED
tickets/ticket48194_test.py::test_run_11 PASSED
tickets/ticket48212_test.py::test_ticket48212 PASSED
tickets/ticket48214_test.py::test_ticket48214_run PASSED
tickets/ticket48226_test.py::test_ticket48226_set_purgedelay PASSED
tickets/ticket48226_test.py::test_ticket48226_1 PASSED
tickets/ticket48228_test.py::test_ticket48228_test_global_policy PASSED
tickets/ticket48228_test.py::test_ticket48228_test_subtree_policy PASSED
tickets/ticket48233_test.py::test_ticket48233 PASSED
tickets/ticket48234_test.py::test_ticket48234 PASSED
tickets/ticket48252_test.py::test_ticket48252_setup PASSED
tickets/ticket48252_test.py::test_ticket48252_run_0 PASSED
tickets/ticket48252_test.py::test_ticket48252_run_1 PASSED
tickets/ticket48265_test.py::test_ticket48265_test PASSED
tickets/ticket48266_test.py::test_ticket48266_fractional PASSED
tickets/ticket48266_test.py::test_ticket48266_check_repl_desc PASSED
tickets/ticket48266_test.py::test_ticket48266_count_csn_evaluation FAILED
tickets/ticket48270_test.py::test_ticket48270_init PASSED
tickets/ticket48270_test.py::test_ticket48270_homeDirectory_indexed_cis FAILED
tickets/ticket48270_test.py::test_ticket48270_homeDirectory_mixed_value PASSED
tickets/ticket48270_test.py::test_ticket48270_extensible_search PASSED
tickets/ticket48272_test.py::test_ticket48272 PASSED
tickets/ticket48294_test.py::test_48294_init PASSED
tickets/ticket48294_test.py::test_48294_run_0 PASSED
tickets/ticket48294_test.py::test_48294_run_1 PASSED
tickets/ticket48294_test.py::test_48294_run_2 PASSED
tickets/ticket48295_test.py::test_48295_init PASSED
tickets/ticket48295_test.py::test_48295_run PASSED
tickets/ticket48312_test.py::test_ticket48312 PASSED
tickets/ticket48325_test.py::test_ticket48325 PASSED
tickets/ticket48342_test.py::test_ticket4026 ERROR
tickets/ticket48354_test.py::test_ticket48354 PASSED
tickets/ticket48362_test.py::test_ticket48362 PASSED
tickets/ticket48366_test.py::test_ticket48366_init PASSED
tickets/ticket48366_test.py::test_ticket48366_search_user PASSED
tickets/ticket48366_test.py::test_ticket48366_search_dm PASSED
tickets/ticket48370_test.py::test_ticket48370 PASSED
tickets/ticket48383_test.py::test_ticket48383 FAILED
tickets/ticket48497_test.py::test_ticket48497_init PASSED
tickets/ticket48497_test.py::test_ticket48497_homeDirectory_mixed_value PASSED
tickets/ticket48497_test.py::test_ticket48497_extensible_search PASSED
tickets/ticket48497_test.py::test_ticket48497_homeDirectory_index_cfg PASSED
tickets/ticket48497_test.py::test_ticket48497_homeDirectory_index_run FAILED
tickets/ticket48637_test.py::test_ticket48637 PASSED
tickets/ticket48665_test.py::test_ticket48665 PASSED
tickets/ticket48745_test.py::test_ticket48745_init PASSED
tickets/ticket48745_test.py::test_ticket48745_homeDirectory_indexed_cis FAILED
tickets/ticket48745_test.py::test_ticket48745_homeDirectory_mixed_value PASSED
tickets/ticket48745_test.py::test_ticket48745_extensible_search_after_index PASSED
tickets/ticket48746_test.py::test_ticket48746_init PASSED
tickets/ticket48746_test.py::test_ticket48746_homeDirectory_indexed_cis FAILED
tickets/ticket48746_test.py::test_ticket48746_homeDirectory_mixed_value PASSED
tickets/ticket48746_test.py::test_ticket48746_extensible_search_after_index PASSED
tickets/ticket48746_test.py::test_ticket48746_homeDirectory_indexed_ces FAILED
tickets/ticket48755_test.py::test_ticket48755 PASSED
tickets/ticket48759_test.py::test_ticket48759 PASSED
tickets/ticket48784_test.py::test_ticket48784 PASSED
tickets/ticket48798_test.py::test_ticket48798 PASSED
tickets/ticket48799_test.py::test_ticket48799 PASSED
tickets/ticket48808_test.py::test_ticket48808 PASSED
tickets/ticket48844_test.py::test_ticket48844_init PASSED
tickets/ticket48844_test.py::test_ticket48844_bitwise_on PASSED
tickets/ticket48844_test.py::test_ticket48844_bitwise_off PASSED
tickets/ticket48891_test.py::test_ticket48891_setup PASSED
tickets/ticket48893_test.py::test_ticket48893 PASSED
tickets/ticket48896_test.py::test_ticket48896 PASSED
tickets/ticket48906_test.py::test_ticket48906_setup PASSED
tickets/ticket48906_test.py::test_ticket48906_dblock_default PASSED
tickets/ticket48906_test.py::test_ticket48906_dblock_ldap_update FAILED
tickets/ticket48906_test.py::test_ticket48906_dblock_edit_update FAILED
tickets/ticket48906_test.py::test_ticket48906_dblock_robust FAILED
tickets/ticket48916_test.py::test_ticket48916 PASSED
tickets/ticket48956_test.py::test_ticket48956 PASSED
tickets/ticket548_test.py::test_ticket548_test_with_no_policy PASSED
tickets/ticket548_test.py::test_ticket548_test_global_policy PASSED
tickets/ticket548_test.py::test_ticket548_test_subtree_policy PASSED
suites/acct_usability_plugin/acct_usability_test.py::test_acct_usability_init PASSED
suites/acct_usability_plugin/acct_usability_test.py::test_acct_usability_ PASSED
suites/acctpolicy_plugin/acctpolicy_test.py::test_acctpolicy_init PASSED
suites/acctpolicy_plugin/acctpolicy_test.py::test_acctpolicy_ PASSED
suites/acl/acl_test.py::test_aci_attr_subtype_targetattr[lang-ja] PASSED
suites/acl/acl_test.py::test_aci_attr_subtype_targetattr[binary] PASSED
suites/acl/acl_test.py::test_aci_attr_subtype_targetattr[phonetic] PASSED
suites/acl/acl_test.py::test_mode_default_add_deny PASSED
suites/acl/acl_test.py::test_mode_default_delete_deny PASSED
suites/acl/acl_test.py::test_moddn_staging_prod[0-cn=staged user,dc=example,dc=com-cn=accounts,dc=example,dc=com-False] PASSED
suites/acl/acl_test.py::test_moddn_staging_prod[1-cn=staged user,dc=example,dc=com-cn=accounts,dc=example,dc=com-False] PASSED
suites/acl/acl_test.py::test_moddn_staging_prod[2-cn=staged user,dc=example,dc=com-cn=bad*,dc=example,dc=com-True] PASSED
suites/acl/acl_test.py::test_moddn_staging_prod[3-cn=st*,dc=example,dc=com-cn=accounts,dc=example,dc=com-False] PASSED
suites/acl/acl_test.py::test_moddn_staging_prod[4-cn=bad*,dc=example,dc=com-cn=accounts,dc=example,dc=com-True] PASSED
suites/acl/acl_test.py::test_moddn_staging_prod[5-cn=st*,dc=example,dc=com-cn=ac*,dc=example,dc=com-False] PASSED
suites/acl/acl_test.py::test_moddn_staging_prod[6-None-cn=ac*,dc=example,dc=com-False] PASSED
suites/acl/acl_test.py::test_moddn_staging_prod[7-cn=st*,dc=example,dc=com-None-False] PASSED
suites/acl/acl_test.py::test_moddn_staging_prod[8-None-None-False] PASSED
suites/acl/acl_test.py::test_moddn_staging_prod_9 PASSED
suites/acl/acl_test.py::test_moddn_prod_staging PASSED
suites/acl/acl_test.py::test_check_repl_M2_to_M1 PASSED
suites/acl/acl_test.py::test_moddn_staging_prod_except PASSED
suites/acl/acl_test.py::test_mode_default_ger_no_moddn PASSED
suites/acl/acl_test.py::test_mode_default_ger_with_moddn PASSED
suites/acl/acl_test.py::test_mode_switch_default_to_legacy PASSED
suites/acl/acl_test.py::test_mode_legacy_ger_no_moddn1 PASSED
suites/acl/acl_test.py::test_mode_legacy_ger_no_moddn2 PASSED
suites/acl/acl_test.py::test_mode_legacy_ger_with_moddn PASSED
suites/acl/acl_test.py::test_rdn_write_get_ger PASSED
suites/acl/acl_test.py::test_rdn_write_modrdn_anonymous PASSED
suites/attr_encryption/attr_encrypt_test.py::test_attr_encrypt_init PASSED
suites/attr_encryption/attr_encrypt_test.py::test_attr_encrypt_ PASSED
suites/attr_uniqueness_plugin/attr_uniqueness_test.py::test_attr_uniqueness_init PASSED
suites/attr_uniqueness_plugin/attr_uniqueness_test.py::test_attr_uniqueness PASSED
suites/automember_plugin/automember_test.py::test_automember_init PASSED
suites/automember_plugin/automember_test.py::test_automember_ PASSED
suites/basic/basic_test.py::test_basic_ops PASSED
suites/basic/basic_test.py::test_basic_import_export PASSED
suites/basic/basic_test.py::test_basic_backup PASSED
suites/basic/basic_test.py::test_basic_acl PASSED
suites/basic/basic_test.py::test_basic_searches PASSED
suites/basic/basic_test.py::test_basic_referrals PASSED
suites/basic/basic_test.py::test_basic_systemctl PASSED
suites/basic/basic_test.py::test_basic_ldapagent PASSED
suites/basic/basic_test.py::test_basic_dse PASSED
suites/basic/basic_test.py::test_def_rootdse_attr[namingContexts] PASSED
suites/basic/basic_test.py::test_def_rootdse_attr[supportedLDAPVersion] PASSED
suites/basic/basic_test.py::test_def_rootdse_attr[supportedControl] PASSED
suites/basic/basic_test.py::test_def_rootdse_attr[supportedExtension] PASSED
suites/basic/basic_test.py::test_def_rootdse_attr[supportedSASLMechanisms] PASSED
suites/basic/basic_test.py::test_def_rootdse_attr[vendorName] PASSED
suites/basic/basic_test.py::test_def_rootdse_attr[vendorVersion] PASSED
suites/basic/basic_test.py::test_mod_def_rootdse_attr[namingContexts] PASSED
suites/basic/basic_test.py::test_mod_def_rootdse_attr[supportedLDAPVersion] PASSED
suites/basic/basic_test.py::test_mod_def_rootdse_attr[supportedControl] PASSED
suites/basic/basic_test.py::test_mod_def_rootdse_attr[supportedExtension] PASSED
suites/basic/basic_test.py::test_mod_def_rootdse_attr[supportedSASLMechanisms] PASSED
suites/basic/basic_test.py::test_mod_def_rootdse_attr[vendorName] PASSED
suites/basic/basic_test.py::test_mod_def_rootdse_attr[vendorVersion] PASSED
suites/betxns/betxn_test.py::test_betxn_init PASSED
suites/betxns/betxn_test.py::test_betxt_7bit PASSED
suites/betxns/betxn_test.py::test_betxn_attr_uniqueness PASSED
suites/betxns/betxn_test.py::test_betxn_memberof PASSED
suites/chaining_plugin/chaining_test.py::test_chaining_init PASSED
suites/chaining_plugin/chaining_test.py::test_chaining_ PASSED
suites/clu/clu_test.py::test_clu_init PASSED
suites/clu/clu_test.py::test_clu_pwdhash PASSED
suites/clu/db2ldif_test.py::test_db2ldif_init PASSED
suites/collation_plugin/collatation_test.py::test_collatation_init PASSED
suites/collation_plugin/collatation_test.py::test_collatation_ PASSED
suites/config/config_test.py::test_maxbersize_repl ERROR
suites/config/config_test.py::test_config_listen_backport_size ERROR
suites/config/config_test.py::test_config_deadlock_policy ERROR
suites/cos_plugin/cos_test.py::test_cos_init PASSED
suites/cos_plugin/cos_test.py::test_cos_ PASSED
suites/deref_plugin/deref_test.py::test_deref_init PASSED
suites/deref_plugin/deref_test.py::test_deref_ PASSED
suites/disk_monitoring/disk_monitor_test.py::test_disk_monitor_init PASSED
suites/disk_monitoring/disk_monitor_test.py::test_disk_monitor_ PASSED
suites/distrib_plugin/distrib_test.py::test_distrib_init PASSED
suites/distrib_plugin/distrib_test.py::test_distrib_ PASSED
suites/dna_plugin/dna_test.py::test_dna_init PASSED
suites/dna_plugin/dna_test.py::test_dna_ PASSED
suites/ds_logs/ds_logs_test.py::test_ds_logs_init PASSED
suites/ds_logs/ds_logs_test.py::test_ds_logs_ PASSED
suites/dynamic-plugins/test_dynamic_plugins.py::test_dynamic_plugins PASSED
suites/filter/filter_test.py::test_filter_init PASSED
suites/filter/filter_test.py::test_filter_escaped PASSED
suites/filter/filter_test.py::test_filter_search_original_attrs PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_supported_features PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[-False-oper_attr_list0] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[-False-oper_attr_list0-*] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[-False-oper_attr_list0-objectClass] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[-True-oper_attr_list1] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[-True-oper_attr_list1-*] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[-True-oper_attr_list1-objectClass] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[ou=people,dc=example,dc=com-False-oper_attr_list2] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[ou=people,dc=example,dc=com-False-oper_attr_list2-*] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[ou=people,dc=example,dc=com-False-oper_attr_list2-objectClass] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[ou=people,dc=example,dc=com-True-oper_attr_list3] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[ou=people,dc=example,dc=com-True-oper_attr_list3-*] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[ou=people,dc=example,dc=com-True-oper_attr_list3-objectClass] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[uid=all_attrs_test,ou=people,dc=example,dc=com-False-oper_attr_list4] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[uid=all_attrs_test,ou=people,dc=example,dc=com-False-oper_attr_list4-*] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[uid=all_attrs_test,ou=people,dc=example,dc=com-False-oper_attr_list4-objectClass] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[uid=all_attrs_test,ou=people,dc=example,dc=com-True-oper_attr_list5] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[uid=all_attrs_test,ou=people,dc=example,dc=com-True-oper_attr_list5-*] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[uid=all_attrs_test,ou=people,dc=example,dc=com-True-oper_attr_list5-objectClass] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[cn=config-False-oper_attr_list6] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[cn=config-False-oper_attr_list6-*] PASSED
suites/filter/rfc3673_all_oper_attrs_test.py::test_search_basic[cn=config-False-oper_attr_list6-objectClass] PASSED
suites/get_effective_rights/ger_test.py::test_ger_init PASSED
suites/get_effective_rights/ger_test.py::test_ger_ PASSED
suites/gssapi_repl/gssapi_repl_test.py::test_gssapi_repl PASSED
suites/ldapi/ldapi_test.py::test_ldapi_init PASSED
suites/ldapi/ldapi_test.py::test_ldapi_ PASSED
suites/linkedattrs_plugin/linked_attrs_test.py::test_linked_attrs_init PASSED
suites/linkedattrs_plugin/linked_attrs_test.py::test_linked_attrs_ PASSED
suites/mapping_tree/mapping_tree_test.py::test_mapping_tree_init PASSED
suites/mapping_tree/mapping_tree_test.py::test_mapping_tree_ PASSED
suites/memberof_plugin/memberof_test.py::test_memberof_auto_add_oc PASSED
suites/memory_leaks/range_search_test.py::test_range_search_init FAILED
suites/memory_leaks/range_search_test.py::test_range_search PASSED
suites/memory_leaks/range_search_test.py::test_range_search ERROR
suites/monitor/monitor_test.py::test_monitor_init PASSED
suites/monitor/monitor_test.py::test_monitor_ PASSED
suites/paged_results/paged_results_test.py::test_search_success[6-5] PASSED
suites/paged_results/paged_results_test.py::test_search_success[5-5] PASSED
suites/paged_results/paged_results_test.py::test_search_success[5-25] PASSED
suites/paged_results/paged_results_test.py::test_search_limits_fail[50-200-cn=config,cn=ldbm database,cn=plugins,cn=config-nsslapd-idlistscanlimit-100-UNWILLING_TO_PERFORM] PASSED
suites/paged_results/paged_results_test.py::test_search_limits_fail[5-15-cn=config-nsslapd-timelimit-20-UNAVAILABLE_CRITICAL_EXTENSION] PASSED
suites/paged_results/paged_results_test.py::test_search_limits_fail[21-50-cn=config-nsslapd-sizelimit-20-SIZELIMIT_EXCEEDED] PASSED
suites/paged_results/paged_results_test.py::test_search_limits_fail[21-50-cn=config-nsslapd-pagedsizelimit-5-SIZELIMIT_EXCEEDED] PASSED
suites/paged_results/paged_results_test.py::test_search_limits_fail[5-50-cn=config,cn=ldbm database,cn=plugins,cn=config-nsslapd-lookthroughlimit-20-ADMINLIMIT_EXCEEDED] PASSED
suites/paged_results/paged_results_test.py::test_search_sort_success PASSED
suites/paged_results/paged_results_test.py::test_search_abandon PASSED
suites/paged_results/paged_results_test.py::test_search_with_timelimit PASSED
suites/paged_results/paged_results_test.py::test_search_dns_ip_aci[dns = "localhost.localdomain"] PASSED
suites/paged_results/paged_results_test.py::test_search_dns_ip_aci[ip = "::1" or ip = "127.0.0.1"] PASSED
suites/paged_results/paged_results_test.py::test_search_multiple_paging PASSED
suites/paged_results/paged_results_test.py::test_search_invalid_cookie[1000] PASSED
suites/paged_results/paged_results_test.py::test_search_invalid_cookie[-1] PASSED
suites/paged_results/paged_results_test.py::test_search_abandon_with_zero_size PASSED
suites/paged_results/paged_results_test.py::test_search_pagedsizelimit_success PASSED
suites/paged_results/paged_results_test.py::test_search_nspagedsizelimit[5-15-PASS] PASSED
suites/paged_results/paged_results_test.py::test_search_nspagedsizelimit[15-5-SIZELIMIT_EXCEEDED] PASSED
suites/paged_results/paged_results_test.py::test_search_paged_limits[conf_attr_values0-ADMINLIMIT_EXCEEDED] PASSED
suites/paged_results/paged_results_test.py::test_search_paged_limits[conf_attr_values1-PASS] PASSED
suites/paged_results/paged_results_test.py::test_search_paged_user_limits[conf_attr_values0-ADMINLIMIT_EXCEEDED] PASSED
suites/paged_results/paged_results_test.py::test_search_paged_user_limits[conf_attr_values1-PASS] PASSED
suites/paged_results/paged_results_test.py::test_ger_basic PASSED
suites/paged_results/paged_results_test.py::test_multi_suffix_search FAILED
suites/paged_results/paged_results_test.py::test_maxsimplepaged_per_conn_success[None] PASSED
suites/paged_results/paged_results_test.py::test_maxsimplepaged_per_conn_success[-1] PASSED
suites/paged_results/paged_results_test.py::test_maxsimplepaged_per_conn_success[1000] PASSED
suites/paged_results/paged_results_test.py::test_maxsimplepaged_per_conn_failure[0] PASSED
suites/paged_results/paged_results_test.py::test_maxsimplepaged_per_conn_failure[1] PASSED
suites/pam_passthru_plugin/pam_test.py::test_pam_init PASSED
suites/pam_passthru_plugin/pam_test.py::test_pam_ PASSED
suites/passthru_plugin/passthru_test.py::test_passthru_init PASSED
suites/passthru_plugin/passthru_test.py::test_passthru_ PASSED
suites/password/password_test.py::test_password_init PASSED
suites/password/password_test.py::test_password_delete_specific_password PASSED
suites/password/pwdAdmin_test.py::test_pwdAdmin_init PASSED
suites/password/pwdAdmin_test.py::test_pwdAdmin PASSED
suites/password/pwdAdmin_test.py::test_pwdAdmin_config_validation PASSED
suites/password/pwdPolicy_attribute_test.py::test_change_pwd[on-off-UNWILLING_TO_PERFORM] PASSED
suites/password/pwdPolicy_attribute_test.py::test_change_pwd[off-off-UNWILLING_TO_PERFORM] PASSED
suites/password/pwdPolicy_attribute_test.py::test_change_pwd[off-on-None] PASSED
suites/password/pwdPolicy_attribute_test.py::test_change_pwd[on-on-None] PASSED
suites/password/pwdPolicy_attribute_test.py::test_pwd_min_age PASSED
suites/password/pwdPolicy_inherit_global_test.py::test_entry_has_no_restrictions[off-off] PASSED
suites/password/pwdPolicy_inherit_global_test.py::test_entry_has_no_restrictions[on-off] PASSED
suites/password/pwdPolicy_inherit_global_test.py::test_entry_has_no_restrictions[off-on] PASSED
suites/password/pwdPolicy_inherit_global_test.py::test_entry_has_restrictions[cn=config] PASSED
suites/password/pwdPolicy_inherit_global_test.py::test_entry_has_restrictions[cn="cn=nsPwPolicyEntry,ou=People,dc=example,dc=com",cn=nsPwPolicyContainer,ou=People,dc=example,dc=com] PASSED
suites/password/pwdPolicy_syntax_test.py::test_pwdPolicy_syntax PASSED
suites/password/pwdPolicy_warning_test.py::test_different_values[ ] PASSED
suites/password/pwdPolicy_warning_test.py::test_different_values[junk123] PASSED
suites/password/pwdPolicy_warning_test.py::test_different_values[on] PASSED
suites/password/pwdPolicy_warning_test.py::test_different_values[off] PASSED
suites/password/pwdPolicy_warning_test.py::test_expiry_time PASSED
suites/password/pwdPolicy_warning_test.py::test_password_warning[passwordSendExpiringTime-off] PASSED
suites/password/pwdPolicy_warning_test.py::test_password_warning[passwordWarning-3600] PASSED
suites/password/pwdPolicy_warning_test.py::test_with_different_password_states PASSED
suites/password/pwdPolicy_warning_test.py::test_default_behavior PASSED
suites/password/pwdPolicy_warning_test.py::test_with_local_policy PASSED
suites/password/pwp_history_test.py::test_pwp_history_test PASSED
suites/posix_winsync_plugin/posix_winsync_test.py::test_posix_winsync_init PASSED
suites/posix_winsync_plugin/posix_winsync_test.py::test_posix_winsync_ PASSED
suites/psearch/psearch_test.py::test_psearch_init PASSED
suites/psearch/psearch_test.py::test_psearch_ PASSED
suites/referint_plugin/referint_test.py::test_referint_init PASSED
suites/referint_plugin/referint_test.py::test_referint_ PASSED
suites/replication/cleanallruv_test.py::test_cleanallruv_init PASSED
suites/replication/cleanallruv_test.py::test_cleanallruv_clean PASSED
suites/replication/cleanallruv_test.py::test_cleanallruv_clean_restart PASSED
suites/replication/cleanallruv_test.py::test_cleanallruv_clean_force PASSED
suites/replication/cleanallruv_test.py::test_cleanallruv_abort PASSED
suites/replication/cleanallruv_test.py::test_cleanallruv_abort_restart PASSED
suites/replication/cleanallruv_test.py::test_cleanallruv_abort_certify PASSED
suites/replication/cleanallruv_test.py::test_cleanallruv_stress_clean FAILED
suites/replication/wait_for_async_feature_test.py::test_not_int_value PASSED
suites/replication/wait_for_async_feature_test.py::test_multi_value PASSED
suites/replication/wait_for_async_feature_test.py::test_value_check[waitfor_async_attr0] PASSED
suites/replication/wait_for_async_feature_test.py::test_value_check[waitfor_async_attr1] PASSED
suites/replication/wait_for_async_feature_test.py::test_value_check[waitfor_async_attr2] PASSED
suites/replication/wait_for_async_feature_test.py::test_value_check[waitfor_async_attr3] PASSED
suites/replication/wait_for_async_feature_test.py::test_behavior_with_value[waitfor_async_attr0] PASSED
suites/replication/wait_for_async_feature_test.py::test_behavior_with_value[waitfor_async_attr1] PASSED
suites/replication/wait_for_async_feature_test.py::test_behavior_with_value[waitfor_async_attr2] PASSED
suites/replication/wait_for_async_feature_test.py::test_behavior_with_value[waitfor_async_attr3] PASSED
suites/replsync_plugin/repl_sync_test.py::test_repl_sync_init PASSED
suites/replsync_plugin/repl_sync_test.py::test_repl_sync_ PASSED
suites/resource_limits/res_limits_test.py::test_res_limits_init PASSED
suites/resource_limits/res_limits_test.py::test_res_limits_ PASSED
suites/retrocl_plugin/retrocl_test.py::test_retrocl_init PASSED
suites/retrocl_plugin/retrocl_test.py::test_retrocl_ PASSED
suites/reverpwd_plugin/reverpwd_test.py::test_reverpwd_init PASSED
suites/reverpwd_plugin/reverpwd_test.py::test_reverpwd_ PASSED
suites/roles_plugin/roles_test.py::test_roles_init PASSED
suites/roles_plugin/roles_test.py::test_roles_ PASSED
suites/rootdn_plugin/rootdn_plugin_test.py::test_rootdn_init PASSED
suites/rootdn_plugin/rootdn_plugin_test.py::test_rootdn_access_specific_time PASSED
suites/rootdn_plugin/rootdn_plugin_test.py::test_rootdn_access_day_of_week PASSED
suites/rootdn_plugin/rootdn_plugin_test.py::test_rootdn_access_denied_ip PASSED
suites/rootdn_plugin/rootdn_plugin_test.py::test_rootdn_access_denied_host PASSED
suites/rootdn_plugin/rootdn_plugin_test.py::test_rootdn_access_allowed_ip PASSED
suites/rootdn_plugin/rootdn_plugin_test.py::test_rootdn_access_allowed_host PASSED
suites/rootdn_plugin/rootdn_plugin_test.py::test_rootdn_config_validate PASSED
suites/sasl/sasl_test.py::test_sasl_init PASSED
suites/sasl/sasl_test.py::test_sasl_ PASSED
suites/schema/test_schema.py::test_schema_comparewithfiles PASSED
suites/schema_reload_plugin/schema_reload_test.py::test_schema_reload_init PASSED
suites/schema_reload_plugin/schema_reload_test.py::test_schema_reload_ PASSED
suites/snmp/snmp_test.py::test_snmp_init PASSED
suites/snmp/snmp_test.py::test_snmp_ PASSED
suites/ssl/ssl_test.py::test_ssl_init PASSED
suites/ssl/ssl_test.py::test_ssl_ PASSED
suites/syntax_plugin/syntax_test.py::test_syntax_init PASSED
suites/syntax_plugin/syntax_test.py::test_syntax_ PASSED
suites/usn_plugin/usn_test.py::test_usn_init PASSED
suites/usn_plugin/usn_test.py::test_usn_ PASSED
suites/views_plugin/views_test.py::test_views_init PASSED
suites/views_plugin/views_test.py::test_views_ PASSED
suites/vlv/vlv_test.py::test_vlv_init PASSED
suites/vlv/vlv_test.py::test_vlv_ PASSED
suites/whoami_plugin/whoami_test.py::test_whoami_init PASSED
suites/whoami_plugin/whoami_test.py::test_whoami_ PASSED

==================================== ERRORS ====================================
______________________ ERROR at setup of
test_ticket4026 _______________________

request = <SubRequest 'topology' for <Function 'test_ticket4026'>>

    @pytest.fixture(scope="module")
    def topology(request):
        global installation1_prefix
        if installation1_prefix:
            args_instance[SER_DEPLOYED_DIR] = installation1_prefix

        # Creating master 1...
        master1 = DirSrv(verbose=False)
        if installation1_prefix:
            args_instance[SER_DEPLOYED_DIR] = installation1_prefix
        args_instance[SER_HOST] = HOST_MASTER_1
        args_instance[SER_PORT] = PORT_MASTER_1
        args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_1
        args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
        args_master = args_instance.copy()
        master1.allocate(args_master)
        instance_master1 = master1.exists()
        if instance_master1:
            master1.delete()
        master1.create()
        master1.open()
        master1.replica.enableReplication(suffix=SUFFIX, role=REPLICAROLE_MASTER, replicaId=REPLICAID_MASTER_1)

        # Creating master 2...
        master2 = DirSrv(verbose=False)
        if installation1_prefix:
            args_instance[SER_DEPLOYED_DIR] = installation1_prefix
        args_instance[SER_HOST] = HOST_MASTER_2
        args_instance[SER_PORT] = PORT_MASTER_2
        args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_2
        args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
        args_master = args_instance.copy()
        master2.allocate(args_master)
        instance_master2 = master2.exists()
        if instance_master2:
            master2.delete()
        master2.create()
        master2.open()
        master2.replica.enableReplication(suffix=SUFFIX, role=REPLICAROLE_MASTER, replicaId=REPLICAID_MASTER_2)

        # Creating master 3...
        master3 = DirSrv(verbose=False)
        if installation1_prefix:
            args_instance[SER_DEPLOYED_DIR] = installation1_prefix
        args_instance[SER_HOST] = HOST_MASTER_3
        args_instance[SER_PORT] = PORT_MASTER_3
        args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_3
        args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
        args_master = args_instance.copy()
        master3.allocate(args_master)
        instance_master3 = master3.exists()
        if instance_master3:
            master3.delete()
        master3.create()
        master3.open()
        master3.replica.enableReplication(suffix=SUFFIX, role=REPLICAROLE_MASTER, replicaId=REPLICAID_MASTER_3)

        #
        # Create all the agreements
        #
        # Creating agreement from master 1 to master 2
        properties = {RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                      RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                      RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                      RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
        m1_m2_agmt = master1.agreement.create(suffix=SUFFIX, host=master2.host, port=master2.port, properties=properties)
        if not m1_m2_agmt:
            log.fatal("Fail to create a master -> master replica agreement")
            sys.exit(1)
        log.debug("%s created" % m1_m2_agmt)

        # Creating agreement from master 1 to master 3
        # properties = {RA_NAME: r'meTo_$host:$port',
        #               RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
        #               RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
        #               RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
        #               RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
        # m1_m3_agmt = master1.agreement.create(suffix=SUFFIX, host=master3.host, port=master3.port, properties=properties)
        # if not m1_m3_agmt:
        #     log.fatal("Fail to create a master -> master replica agreement")
        #     sys.exit(1)
        # log.debug("%s created" % m1_m3_agmt)

        # Creating agreement from master 2 to master 1
        properties = {RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                      RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                      RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                      RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
        m2_m1_agmt = master2.agreement.create(suffix=SUFFIX, host=master1.host, port=master1.port, properties=properties)
        if not m2_m1_agmt:
            log.fatal("Fail to create a master -> master replica agreement")
            sys.exit(1)
        log.debug("%s created" % m2_m1_agmt)

        # Creating agreement from master 2 to master 3
        properties = {RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                      RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                      RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                      RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
        m2_m3_agmt = master2.agreement.create(suffix=SUFFIX, host=master3.host, port=master3.port,
                                              properties=properties)
        if not m2_m3_agmt:
            log.fatal("Fail to create a master -> master replica agreement")
            sys.exit(1)
        log.debug("%s created" % m2_m3_agmt)

        # Creating agreement from master 3 to master 1
        # properties = {RA_NAME: r'meTo_$host:$port',
        #               RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
        #               RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
        #               RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
        #               RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
        # m3_m1_agmt = master3.agreement.create(suffix=SUFFIX, host=master1.host, port=master1.port, properties=properties)
        # if not m3_m1_agmt:
        #     log.fatal("Fail to create a master -> master replica agreement")
        #     sys.exit(1)
        # log.debug("%s created" % m3_m1_agmt)

        # Creating agreement from master 3 to master 2
        properties = {RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                      RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                      RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                      RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
        m3_m2_agmt = master3.agreement.create(suffix=SUFFIX, host=master2.host, port=master2.port, properties=properties)
        if not m3_m2_agmt:
            log.fatal("Fail to create a master -> master replica agreement")
            sys.exit(1)
        log.debug("%s created" % m3_m2_agmt)

        # Allow the replicas to get situated with the new agreements...
        time.sleep(5)

        #
        # Initialize all the agreements
        #
        master1.agreement.init(SUFFIX, HOST_MASTER_2, PORT_MASTER_2)
        master1.waitForReplInit(m1_m2_agmt)
        time.sleep(5)  # just to be safe
        master2.agreement.init(SUFFIX, HOST_MASTER_3, PORT_MASTER_3)
        master2.waitForReplInit(m2_m3_agmt)

        # Check replication is working...
        if master1.testReplication(DEFAULT_SUFFIX, master2):
            log.info('Replication is working.')
        else:
            log.fatal('Replication is not working.')
            assert False

        # Delete each instance in the end
        def fin():
            for master in (master1, master2, master3):
                master.delete()
        request.addfinalizer(fin)

        # Clear out the tmp dir
        master1.clearTmpDir(__file__)

>       return TopologyReplication(master1, master2, master3)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48342_test.py>:189:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48342_test.py>:29: in __init__
    master3.open()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f4b0be1b8>, saslmethod = None
certdir = None, starttls = False, connOnly = False

    def open(self, saslmethod=None, certdir=None, starttls=False, connOnly=False):
        '''
        It opens a ldap bound connection to dirsrv so that online
        administrative tasks are possible.  It binds with the binddn property,
        then it initializes various fields from DirSrv (via __initPart2)

        The state changes -> DIRSRV_STATE_ONLINE

        @param self
        @param saslmethod - None, or GSSAPI
        @param certdir - Certificate directory for TLS
        @return None

        @raise LDAPError
        '''
        uri = self.toLDAPURL()
        if self.verbose:
            self.log.info('open(): Connecting to uri %s' % uri)
        if hasattr(ldap, 'PYLDAP_VERSION') and MAJOR >= 3:
            SimpleLDAPObject.__init__(self, uri, bytes_mode=False)
        else:
            SimpleLDAPObject.__init__(self, uri)
        if certdir:
            """
            We have a certificate directory, so lets start up TLS negotiations
            """
            self.set_option(ldap.OPT_X_TLS_CACERTFILE, certdir)
        if certdir or starttls:
            try:
                self.start_tls_s()
            except ldap.LDAPError as e:
                log.fatal('TLS negotiation failed: %s' % str(e))
                raise e
        if saslmethod and saslmethod.lower() == 'gssapi':
            """
            Perform kerberos/gssapi authentication
            """
            try:
                sasl_auth = ldap.sasl.gssapi("")
                self.sasl_interactive_bind_s("", sasl_auth)
            except ldap.LOCAL_ERROR as e:
                # No Ticket - ultimately invalid credentials
                log.debug("Error: No Ticket (%s)" % str(e))
                raise ldap.INVALID_CREDENTIALS
            except ldap.LDAPError as e:
                log.debug("SASL/GSSAPI Bind Failed: %s" % str(e))
                raise e
        elif saslmethod:
            # Unknown or unsupported method
            log.debug('Unsupported SASL method: %s' % saslmethod)
            raise ldap.UNWILLING_TO_PERFORM
        elif self.can_autobind():
            # Connect via ldapi, and autobind.
            # do nothing: the bind is complete.
            if self.verbose:
                log.info("open(): Using root autobind ...")
            sasl_auth = ldap.sasl.external()
            self.sasl_interactive_bind_s("", sasl_auth)
        else:
            """
            Do a simple bind
            """
            try:
                self.simple_bind_s(ensure_str(self.binddn), self.bindpw)
            except ldap.SERVER_DOWN as e:
                # TODO add server info in exception
                log.debug("Cannot connect to %r" % uri)
>               raise e
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:1043: SERVER_DOWN
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
OK group dirsrv exists
OK user dirsrv exists
OK group dirsrv exists
OK user dirsrv exists
('Update succeeded: status ', '0 Total update succeeded')
('Update succeeded: status ', '0 Total update succeeded')
---------------------------- Captured stderr setup -----------------------------
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}a/p4bBr7GKb8rsOeesoQA2qDPb3BAl392SsmGOjdnKwM6oPEs8EqEd6k4v1mfWO7BmNYp9KSVmPXCgdxihkKteHiOH5DDBab
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}BlK3tUgS7nT1AweY729luB752VT5hGnrJ6XfTkUU8SFwXhp+B0qGMLsmLOggkIb1x8YJgzJOuTbUso0p1RlWw3VIjRYkz6JG
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}Z0J01TAznnVbwyS0l0MnrDdFHfklLqvi7omHNEcJrThD5N4uGiMoPuuxHBCZk4Pnja2p0U1xv/stqd+cs0AG3Wj6H9ydggoh
DEBUG:tickets.ticket48342_test:cn=meTo_localhost.localdomain:38942,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:tickets.ticket48342_test:cn=meTo_localhost.localdomain:38941,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:tickets.ticket48342_test:cn=meTo_localhost.localdomain:38943,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:tickets.ticket48342_test:cn=meTo_localhost.localdomain:38942,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
INFO:lib389:Starting total init cn=meTo_localhost.localdomain:38942,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
INFO:lib389:Starting total init cn=meTo_localhost.localdomain:38943,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
INFO:tickets.ticket48342_test:Replication is working.
____________________ ERROR at setup of test_maxbersize_repl ____________________

request = <SubRequest 'topology' for <Function 'test_maxbersize_repl'>>

    @pytest.fixture(scope="module")
    def topology(request):
        """Create Replication Deployment"""

        # Creating master 1...
        if DEBUGGING:
            master1 = DirSrv(verbose=True)
        else:
            master1 = DirSrv(verbose=False)
        args_instance[SER_HOST] = HOST_MASTER_1
        args_instance[SER_PORT] = PORT_MASTER_1
        args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_1
        args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
        args_master = args_instance.copy()
        master1.allocate(args_master)
        instance_master1 = master1.exists()
        if instance_master1:
            master1.delete()
        master1.create()
        master1.open()
        master1.replica.enableReplication(suffix=SUFFIX, role=REPLICAROLE_MASTER, replicaId=REPLICAID_MASTER_1)

        # Creating master 2...
        if DEBUGGING:
            master2 = DirSrv(verbose=True)
        else:
            master2 = DirSrv(verbose=False)
        args_instance[SER_HOST] = HOST_MASTER_2
        args_instance[SER_PORT] = PORT_MASTER_2
        args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_2
        args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
        args_master = args_instance.copy()
        master2.allocate(args_master)
        instance_master2 = master2.exists()
        if instance_master2:
            master2.delete()
        master2.create()
        master2.open()
        master2.replica.enableReplication(suffix=SUFFIX, role=REPLICAROLE_MASTER, replicaId=REPLICAID_MASTER_2)

        #
        # Create all the agreements
        #
        # Creating agreement from master 1 to master 2
        properties = {RA_NAME: 'meTo_' + master2.host + ':' + str(master2.port),
                      RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                      RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                      RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                      RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
        m1_m2_agmt = master1.agreement.create(suffix=SUFFIX, host=master2.host, port=master2.port, properties=properties)
        if not m1_m2_agmt:
            log.fatal("Fail to create a master -> master replica agreement")
            sys.exit(1)
        log.debug("%s created" % m1_m2_agmt)

        # Creating agreement from master 2 to master 1
        properties = {RA_NAME: 'meTo_' + master1.host + ':' + str(master1.port),
                      RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                      RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                      RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                      RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
        m2_m1_agmt = master2.agreement.create(suffix=SUFFIX, host=master1.host, port=master1.port, properties=properties)
        if not m2_m1_agmt:
            log.fatal("Fail to create a master -> master replica agreement")
            sys.exit(1)
        log.debug("%s created" % m2_m1_agmt)

        # Allow the replicas to get situated with the new agreements...
        time.sleep(5)

        #
        # Initialize all the agreements
        #
        master1.agreement.init(SUFFIX, HOST_MASTER_2, PORT_MASTER_2)
>       master1.waitForReplInit(m1_m2_agmt)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/config/config_test.py>:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:2177: in waitForReplInit
    return self.replica.wait_init(agmtdn)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/replica.py>:596: in wait_init
    done, haserror = self.check_init(agmtdn)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/replica.py>:548: in check_init
    agmtdn, ldap.SCOPE_BASE, "(objectclass=*)", attrlist)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:1574: in getEntry
    restype, obj = self.result(res)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:127: in inner
    objtype, data = f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:503: in result
    resp_type, resp_data, resp_msgid = self.result2(msgid,all,timeout)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:507: in result2
    resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all,timeout)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:514: in result3
    resp_ctrl_classes=resp_ctrl_classes
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:521: in result4
    ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f542a21b8>
func = <built-in method result4 of LDAP object at 0x7f6f54487170>
args = (17, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None
e =
SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs
        and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
OK group dirsrv exists
OK user dirsrv exists
---------------------------- Captured stderr setup -----------------------------
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}6mtvDSnEi0vdJT6SplwM5R7N8lt1f8/6UiCgWORqyUsx6qSp4M0iucrlf9BD9yFLHAfAPaHgE7D2PwIpKQupJBXsCH5PiUPM
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}JtpycIah637r6u2FfUiCJULPGEDZVSnUA5hyemmdgr1q6x+j/zOwPf4u6EKi+lkrs0PCblp6S8UsYrikNgwkaCekrW0IXloN
INFO:lib389:Starting total init cn=meTo_localhost.localdomain:38942,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
______________ ERROR at setup of test_config_listen_backport_size ______________

request = <SubRequest 'topology' for <Function 'test_maxbersize_repl'>>

    @pytest.fixture(scope="module")
    def topology(request):
        """Create Replication Deployment"""

        # Creating master 1...
        if DEBUGGING:
            master1 = DirSrv(verbose=True)
        else:
            master1 = DirSrv(verbose=False)
        args_instance[SER_HOST] = HOST_MASTER_1
        args_instance[SER_PORT] = PORT_MASTER_1
        args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_1
        args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
        args_master = args_instance.copy()
        master1.allocate(args_master)
        instance_master1 = master1.exists()
        if instance_master1:
            master1.delete()
        master1.create()
        master1.open()
        master1.replica.enableReplication(suffix=SUFFIX, role=REPLICAROLE_MASTER, replicaId=REPLICAID_MASTER_1)

        # Creating master 2...
        if DEBUGGING:
            master2 = DirSrv(verbose=True)
        else:
            master2 = DirSrv(verbose=False)
        args_instance[SER_HOST] = HOST_MASTER_2
        args_instance[SER_PORT] = PORT_MASTER_2
        args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_2
        args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
        args_master = args_instance.copy()
        master2.allocate(args_master)
        instance_master2 = master2.exists()
        if instance_master2:
            master2.delete()
        master2.create()
        master2.open()
        master2.replica.enableReplication(suffix=SUFFIX, role=REPLICAROLE_MASTER, replicaId=REPLICAID_MASTER_2)

        #
        # Create all the agreements
        #
        # Creating agreement from master 1 to master 2
        properties = {RA_NAME: 'meTo_' + master2.host + ':' + str(master2.port),
                      RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                      RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                      RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                      RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
        m1_m2_agmt = master1.agreement.create(suffix=SUFFIX, host=master2.host, port=master2.port, properties=properties)
        if not m1_m2_agmt:
            log.fatal("Fail to create a master -> master replica agreement")
            sys.exit(1)
        log.debug("%s created" % m1_m2_agmt)

        # Creating agreement from master 2 to master 1
        properties = {RA_NAME: 'meTo_' + master1.host + ':' + str(master1.port),
                      RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                      RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                      RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                      RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
        m2_m1_agmt =
            master2.agreement.create(suffix=SUFFIX, host=master1.host, port=master1.port, properties=properties)
        if not m2_m1_agmt:
            log.fatal("Fail to create a master -> master replica agreement")
            sys.exit(1)
        log.debug("%s created" % m2_m1_agmt)

        # Allow the replicas to get situated with the new agreements...
        time.sleep(5)

        #
        # Initialize all the agreements
        #
        master1.agreement.init(SUFFIX, HOST_MASTER_2, PORT_MASTER_2)
>       master1.waitForReplInit(m1_m2_agmt)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/config/config_test.py>:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:2177: in waitForReplInit
    return self.replica.wait_init(agmtdn)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/replica.py>:596: in wait_init
    done, haserror = self.check_init(agmtdn)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/replica.py>:548: in check_init
    agmtdn, ldap.SCOPE_BASE, "(objectclass=*)", attrlist)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:1574: in getEntry
    restype, obj = self.result(res)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:127: in inner
    objtype, data = f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:503: in result
    resp_type, resp_data, resp_msgid = self.result2(msgid,all,timeout)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:507: in result2
    resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all,timeout)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:514: in result3
    resp_ctrl_classes=resp_ctrl_classes
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:521: in result4
    ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f542a21b8>
func = <built-in method result4 of LDAP object at 0x7f6f54487170>
args = (17, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs
        and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
________________ ERROR at setup of test_config_deadlock_policy _________________

request = <SubRequest 'topology' for <Function 'test_maxbersize_repl'>>

    @pytest.fixture(scope="module")
    def topology(request):
        """Create Replication Deployment"""

        # Creating master 1...
        if DEBUGGING:
            master1 = DirSrv(verbose=True)
        else:
            master1 = DirSrv(verbose=False)
        args_instance[SER_HOST] = HOST_MASTER_1
        args_instance[SER_PORT] = PORT_MASTER_1
        args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_1
        args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
        args_master = args_instance.copy()
        master1.allocate(args_master)
        instance_master1 = master1.exists()
        if instance_master1:
            master1.delete()
        master1.create()
        master1.open()
        master1.replica.enableReplication(suffix=SUFFIX, role=REPLICAROLE_MASTER, replicaId=REPLICAID_MASTER_1)

        # Creating master 2...
        if DEBUGGING:
            master2 = DirSrv(verbose=True)
        else:
            master2 = DirSrv(verbose=False)
        args_instance[SER_HOST] = HOST_MASTER_2
        args_instance[SER_PORT] = PORT_MASTER_2
        args_instance[SER_SERVERID_PROP] = SERVERID_MASTER_2
        args_instance[SER_CREATION_SUFFIX] = DEFAULT_SUFFIX
        args_master = args_instance.copy()
        master2.allocate(args_master)
        instance_master2 = master2.exists()
        if instance_master2:
            master2.delete()
        master2.create()
        master2.open()
        master2.replica.enableReplication(suffix=SUFFIX, role=REPLICAROLE_MASTER, replicaId=REPLICAID_MASTER_2)

        #
        # Create all the agreements
        #
        # Creating agreement from master 1 to master 2
        properties = {RA_NAME: 'meTo_' + master2.host + ':' + str(master2.port),
                      RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                      RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                      RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                      RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
        m1_m2_agmt = master1.agreement.create(suffix=SUFFIX, host=master2.host, port=master2.port, properties=properties)
        if not m1_m2_agmt:
            log.fatal("Fail to create a master -> master replica agreement")
            sys.exit(1)
        log.debug("%s created" % m1_m2_agmt)

        # Creating agreement from master 2 to master 1
        properties = {RA_NAME: 'meTo_' + master1.host + ':' + str(master1.port),
                      RA_BINDDN: defaultProperties[REPLICATION_BIND_DN],
                      RA_BINDPW: defaultProperties[REPLICATION_BIND_PW],
                      RA_METHOD: defaultProperties[REPLICATION_BIND_METHOD],
                      RA_TRANSPORT_PROT: defaultProperties[REPLICATION_TRANSPORT]}
        m2_m1_agmt = master2.agreement.create(suffix=SUFFIX, host=master1.host, port=master1.port, properties=properties)
        if not m2_m1_agmt:
            log.fatal("Fail to create a master -> master replica agreement")
            sys.exit(1)
        log.debug("%s created" % m2_m1_agmt)

        # Allow the replicas to get situated with the new agreements...
        time.sleep(5)

        #
        # Initialize all the agreements
        #
        master1.agreement.init(SUFFIX, HOST_MASTER_2, PORT_MASTER_2)
>       master1.waitForReplInit(m1_m2_agmt)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/config/config_test.py>:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:2177: in waitForReplInit
    return self.replica.wait_init(agmtdn)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/replica.py>:596: in wait_init
    done, haserror = self.check_init(agmtdn)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/replica.py>:548: in check_init
    agmtdn, ldap.SCOPE_BASE, "(objectclass=*)", attrlist)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:1574: in getEntry
    restype, obj = self.result(res)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:127: in inner
    objtype, data = f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:503: in result
    resp_type, resp_data, resp_msgid = self.result2(msgid,all,timeout)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:507: in result2
    resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all,timeout)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:514: in result3
    resp_ctrl_classes=resp_ctrl_classes
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:521: in result4
    ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f542a21b8>
func = <built-in method result4 of LDAP object at 0x7f6f54487170>
args = (17, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs
        and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
____________________ ERROR at teardown of test_range_search ____________________

    def fin():
        standalone.delete()
        if not standalone.has_asan():
            sbin_dir = standalone.get_sbin_dir()
>           valgrind_disable(sbin_dir)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/memory_leaks/range_search_test.py>:61:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

sbin_dir = '/usr/sbin'

    def valgrind_disable(sbin_dir):
        '''
        Restore the ns-slapd binary to its original state - the server
        instances are expected to be stopped.

        Note - selinux is enabled at the end of this process.

        :param sbin_dir - the location of the ns-slapd binary (e.g. /usr/sbin)
        :raise ValueError
        :raise EnvironmentError: If script is not run as 'root'
        '''
        if os.geteuid() != 0:
            log.error('This script must be run as root to use valgrind')
            raise EnvironmentError

        nsslapd_orig = '%s/ns-slapd' % sbin_dir
        nsslapd_backup = '%s/ns-slapd.original' % sbin_dir

        # Restore the original ns-slapd
        try:
            shutil.copyfile(nsslapd_backup, nsslapd_orig)
        except IOError as e:
            log.fatal('valgrind_disable: failed to restore ns-slapd, error: %s' % e.strerror)
>           raise ValueError('failed to restore ns-slapd, error: %s' % e.strerror)
E           ValueError: failed to restore ns-slapd, error: Text file busy

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/utils.py>:288: ValueError
----------------------------- Captured stderr call -----------------------------
INFO:suites.memory_leaks.range_search_test:Running test_range_search...
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user1,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user2,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user3,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user4,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user5,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user6,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user7,dc=example,dc=com: error
Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user8,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user9,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user10,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user11,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user12,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user13,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user14,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user15,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user16,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user17,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user18,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user19,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user20,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user21,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user22,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user23,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user24,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user25,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user26,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user27,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user28,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user29,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user30,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user31,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user32,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user33,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user34,dc=example,dc=com:
error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user35,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user36,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user37,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user38,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user39,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user40,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user41,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user42,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user43,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user44,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user45,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user46,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user47,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user48,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user49,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user50,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user51,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user52,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user53,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user54,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user55,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user56,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user57,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user58,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user59,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user60,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user
uid=user61,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user62,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user63,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user64,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user65,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user66,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user67,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user68,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user69,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user70,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user71,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user72,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user73,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user74,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user75,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user76,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user77,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user78,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user79,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user80,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user81,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user82,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user83,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user84,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user85,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user86,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user87,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user
uid=user88,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user89,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user90,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user91,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user92,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user93,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user94,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user95,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user96,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user97,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user98,dc=example,dc=com: error Can't contact LDAP server
CRITICAL:suites.memory_leaks.range_search_test:test_range_search: Failed to add test user uid=user99,dc=example,dc=com: error Can't contact LDAP server
INFO:suites.memory_leaks.range_search_test:test_range_search: PASSED
--------------------------- Captured stdout teardown ---------------------------
Instance slapd-standalone removed.
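The "ValueError: failed to restore ns-slapd, error: Text file busy" in the teardown above comes from `shutil.copyfile` truncating /usr/sbin/ns-slapd in place while a process is still executing it; the kernel rejects writes to a running binary with ETXTBSY. A minimal sketch of a rename-based restore that avoids this (`restore_binary` is a hypothetical helper, not the lib389 API):

```python
import os
import shutil

def restore_binary(backup, target):
    """Replace `target` with `backup` without writing into it in place.

    Hypothetical sketch: copying into a file that some process is currently
    executing fails with ETXTBSY ("Text file busy"); rename() instead swaps
    the directory entry, so the running process keeps its old inode and the
    restore succeeds even while the old binary is still in use.
    """
    tmp = target + '.tmp'
    shutil.copyfile(backup, tmp)   # writes a brand-new inode, never ETXTBSY
    shutil.copymode(backup, tmp)   # preserve the executable bit
    os.rename(tmp, target)         # atomic swap within the same filesystem
```

Stopping the instance before the copy (or retrying the teardown after `standalone.delete()` has fully reaped the process) would address the same race.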
--------------------------- Captured stderr teardown ---------------------------
CRITICAL:lib389.utils:valgrind_disable: failed to restore ns-slapd, error: Text file busy
=================================== FAILURES ===================================
______________________________ test_ticket1347760 ______________________________

topology = <tickets.ticket1347760_test.TopologyStandalone object at 0x7f6f5421c210>

    def test_ticket1347760(topology):
        """
        Prevent revealing the entry info to whom has no access rights.
        """
        log.info('Testing Bug 1347760 - Information disclosure via repeated use of LDAP ADD operation, etc.')
        log.info('Disabling accesslog logbuffering')
        topology.standalone.modify_s(CONFIG_DN, [(ldap.MOD_REPLACE, 'nsslapd-accesslog-logbuffering', 'off')])
        log.info('Bind as {%s,%s}' % (DN_DM, PASSWORD))
        topology.standalone.simple_bind_s(DN_DM, PASSWORD)
        log.info('Adding ou=%s a bind user belongs to.' % BOU)
        topology.standalone.add_s(Entry((BINDOU, {
            'objectclass': 'top organizationalunit'.split(),
            'ou': BOU})))
        log.info('Adding a bind user.')
        topology.standalone.add_s(Entry((BINDDN,
                                         {'objectclass': "top person organizationalPerson inetOrgPerson".split(),
                                          'cn': 'bind user',
                                          'sn': 'user',
                                          'userPassword': BINDPW})))
        log.info('Adding a test user.')
        topology.standalone.add_s(Entry((TESTDN,
                                         {'objectclass': "top person organizationalPerson inetOrgPerson".split(),
                                          'cn': 'test user',
                                          'sn': 'user',
                                          'userPassword': TESTPW})))
        log.info('Deleting aci in %s.' % DEFAULT_SUFFIX)
        topology.standalone.modify_s(DEFAULT_SUFFIX, [(ldap.MOD_DELETE, 'aci', None)])
        log.info('Bind case 1. the bind user has no rights to read the entry itself, bind should be successful.')
        log.info('Bind as {%s,%s} who has no access rights.' % (BINDDN, BINDPW))
        try:
            topology.standalone.simple_bind_s(BINDDN, BINDPW)
        except ldap.LDAPError as e:
            log.info('Desc ' + e.message['desc'])
            assert False
        file_path = os.path.join(topology.standalone.prefix, 'var/log/dirsrv/slapd-%s/access' % topology.standalone.serverid)
>       file_obj = open(file_path, "r")
E       IOError: [Errno 2] No such file or directory: '/usr/var/log/dirsrv/slapd-standalone/access'

tickets/ticket1347760_test.py:236: IOError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket1347760_test:Testing Bug 1347760 - Information disclosure via repeated use of LDAP ADD operation, etc.
INFO:tickets.ticket1347760_test:Disabling accesslog logbuffering
INFO:tickets.ticket1347760_test:Bind as {cn=Directory Manager,password}
INFO:tickets.ticket1347760_test:Adding ou=BOU a bind user belongs to.
INFO:tickets.ticket1347760_test:Adding a bind user.
INFO:tickets.ticket1347760_test:Adding a test user.
INFO:tickets.ticket1347760_test:Deleting aci in dc=example,dc=com.
INFO:tickets.ticket1347760_test:Bind case 1. the bind user has no rights to read the entry itself, bind should be successful.
INFO:tickets.ticket1347760_test:Bind as {uid=buser123,ou=BOU,dc=example,dc=com,buser123} who has no access rights.
______________________________ test_ticket47431_1 ______________________________

topology = <tickets.ticket47431_test.TopologyStandalone object at 0x7f6f5397f9d0>

    def test_ticket47431_1(topology):
        '''
        nsslapd-pluginarg0: uid
        nsslapd-pluginarg1: mail
        nsslapd-pluginarg2: userpassword <== repeat 27 times
        nsslapd-pluginarg3: ,
        nsslapd-pluginarg4: dc=example,dc=com

        The duplicated values are removed by str2entry_dupcheck as follows:
        [..] - str2entry_dupcheck: 27 duplicate values for attribute type
        nsslapd-pluginarg2 detected in entry cn=7-bit check,cn=plugins,cn=config.
        Extra values ignored.
        '''
        log.info("Ticket 47431 - 1: Check 26 duplicate values are treated as one...")
        expected = "str2entry_dupcheck - . .. .cache duplicate values for attribute type nsslapd-pluginarg2 detected in entry cn=7-bit check,cn=plugins,cn=config."
        log.debug('modify_s %s' % DN_7BITPLUGIN)
        try:
            topology.standalone.modify_s(DN_7BITPLUGIN,
                                         [(ldap.MOD_REPLACE, 'nsslapd-pluginarg0', "uid"),
                                          (ldap.MOD_REPLACE, 'nsslapd-pluginarg1', "mail"),
                                          (ldap.MOD_REPLACE, 'nsslapd-pluginarg2', "userpassword"),
                                          (ldap.MOD_REPLACE, 'nsslapd-pluginarg3', ","),
                                          (ldap.MOD_REPLACE, 'nsslapd-pluginarg4', SUFFIX)])
        except ValueError:
            log.error('modify failed: Some problem occured with a value that was provided')
            assert False
        arg2 = "nsslapd-pluginarg2: userpassword"
        topology.standalone.stop(timeout=10)
        dse_ldif = topology.standalone.confdir + '/dse.ldif'
        os.system('mv %s %s.47431' % (dse_ldif, dse_ldif))
        os.system('sed -e "s/\\(%s\\)/\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1\\n\\1/" %s.47431 > %s' % (arg2, dse_ldif, dse_ldif))
        topology.standalone.start(timeout=10)
        cmdline = 'egrep -i "%s" %s' % (expected, topology.standalone.errlog)
        p = os.popen(cmdline, "r")
        line = p.readline()
        if line == "":
            log.error('Expected error "%s" not logged in %s' % (expected, topology.standalone.errlog))
>           assert False
E           assert False

tickets/ticket47431_test.py:110: AssertionError
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket47431_test:Ticket 47431 - 1: Check 26 duplicate values are treated as one...
DEBUG:tickets.ticket47431_test:modify_s cn=7-bit check,cn=plugins,cn=config
grep: /var/log/dirsrv/slapd-standalone/error: No such file or directory
ERROR:tickets.ticket47431_test:Expected error "str2entry_dupcheck - . .. .cache duplicate values for attribute type nsslapd-pluginarg2 detected in entry cn=7-bit check,cn=plugins,cn=config." not logged in /var/log/dirsrv/slapd-standalone/error
_______________________________ test_ticket47462 _______________________________

topology = <tickets.ticket47462_test.TopologyMaster1Master2 object at 0x7f6f54036d90>

    def test_ticket47462(topology):
        """
        Test that AES properly replaces DES during an update/restart, and that
        replication also works correctly.
        """
        #
        # First set config as if it's an older version.  Set DES to use
        # libdes-plugin, MMR to depend on DES, delete the existing AES plugin,
        # and set a DES password for the replication agreement.
        #
        # Add an extra attribute to the DES plugin args
        #
        try:
            topology.master1.modify_s(DES_PLUGIN, [(ldap.MOD_REPLACE, 'nsslapd-pluginEnabled', 'on')])
        except ldap.LDAPError as e:
            log.fatal('Failed to enable DES plugin, error: ' + e.message['desc'])
            assert False
        try:
            topology.master1.modify_s(DES_PLUGIN, [(ldap.MOD_ADD, 'nsslapd-pluginarg2', 'description')])
        except ldap.LDAPError as e:
            log.fatal('Failed to reset DES plugin, error: ' + e.message['desc'])
            assert False
        try:
            topology.master1.modify_s(MMR_PLUGIN, [(ldap.MOD_DELETE, 'nsslapd-plugin-depends-on-named', 'AES')])
        except ldap.NO_SUCH_ATTRIBUTE:
            pass
        except ldap.LDAPError as e:
            log.fatal('Failed to reset MMR plugin, error: ' + e.message['desc'])
            assert False
        #
        # Delete the AES plugin
        #
        try:
            topology.master1.delete_s(AES_PLUGIN)
        except ldap.NO_SUCH_OBJECT:
            pass
        except ldap.LDAPError as e:
            log.fatal('Failed to delete AES plugin, error: ' + e.message['desc'])
            assert False
        # restart the server so we must use DES plugin
        topology.master1.restart(timeout=10)
        #
        # Get the agmt dn, and set the password
        #
        try:
            entry = topology.master1.search_s('cn=config', ldap.SCOPE_SUBTREE, 'objectclass=nsDS5ReplicationAgreement')
            if entry:
                agmt_dn = entry[0].dn
                log.info('Found agmt dn (%s)' % agmt_dn)
            else:
                log.fatal('No replication agreements!')
                assert False
        except ldap.LDAPError as e:
            log.fatal('Failed to search for replica credentials: ' + e.message['desc'])
            assert False
        try:
            properties = {RA_BINDPW: "password"}
            topology.master1.agreement.setProperties(None, agmt_dn, None, properties)
            log.info('Successfully modified replication agreement')
        except ValueError:
            log.error('Failed to update replica agreement: ' + AGMT_DN)
            assert False
        #
        # Check replication works with the new DES password
        #
        try:
            topology.master1.add_s(Entry((USER1_DN,
                                          {'objectclass': "top person".split(),
                                           'sn': 'sn',
                                           'description': 'DES value to convert',
                                           'cn': 'test_user'})))
            loop = 0
            ent = None
            while loop <= 10:
                try:
                    ent = topology.master2.getEntry(USER1_DN, ldap.SCOPE_BASE, "(objectclass=*)")
                    break
                except ldap.NO_SUCH_OBJECT:
                    time.sleep(1)
                    loop += 1
            if not ent:
                log.fatal('Replication test failed fo user1!')
                assert False
            else:
                log.info('Replication test passed')
        except ldap.LDAPError as e:
            log.fatal('Failed to add test user: ' + e.message['desc'])
            assert False
        #
        # Add a backend (that has no entries)
        #
        try:
            topology.master1.backend.create("o=empty", {BACKEND_NAME: "empty"})
        except ldap.LDAPError as e:
            log.fatal('Failed to create extra/empty backend: ' + e.message['desc'])
            assert False
        #
        # Run the upgrade...
        #
>       topology.master1.upgrade('online')

tickets/ticket47462_test.py:269:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../lib389/lib389/__init__.py:2500: in upgrade
    DirSrvTools.runUpgrade(self.prefix, online)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
prefix = '/usr', online = True

    @staticmethod
    def runUpgrade(prefix, online=True):
        '''
        Run "setup-ds.pl --update"  We simply pass in one DirSrv isntance, and
        this will update all the instances that are in this prefix.  For the
        update to work we must fix/adjust the permissions of the scripts in:

            /prefix/lib[64]/dirsrv/slapd-INSTANCE/
        '''
        if not prefix:
            prefix = ''
            # This is an RPM run - check if /lib exists, if not use /lib64
            if os.path.isdir('/usr/lib/dirsrv'):
                libdir = '/usr/lib/dirsrv/'
            else:
                if os.path.isdir('/usr/lib64/dirsrv'):
                    libdir = '/usr/lib64/dirsrv/'
                else:
                    log.fatal('runUpgrade: failed to find slapd lib dir!')
                    assert False
        else:
            # Standard prefix lib location
            if os.path.isdir('/usr/lib64/dirsrv'):
                libdir = '/usr/lib64/dirsrv/'
            else:
                libdir = '/lib/dirsrv/'
        # Gather all the instances so we can adjust the permissions, otherwise
        servers = []
        path = prefix + '/etc/dirsrv'
>       for files in os.listdir(path):
E       OSError: [Errno 2] No such file or directory: '/usr/etc/dirsrv'

../../../lib389/lib389/tools.py:932: OSError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
OK group dirsrv exists
OK user dirsrv exists
('Update succeeded: status ', '0 Total update succeeded')
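The OSError just above ('/usr/etc/dirsrv') and the earlier IOError ('/usr/var/log/dirsrv/slapd-standalone/access') share a root cause: the reported install prefix ('/usr') is blindly prepended to absolute paths that, for RPM builds, actually live under /etc and /var. A sketch of prefix-aware resolution (`config_dir_candidates` and `instance_config_dir` are hypothetical helpers, not the lib389 API):

```python
import os

def config_dir_candidates(prefix):
    """Possible dirsrv instance-config directories for an install prefix.

    RPM builds report prefix '/usr' yet keep instance configs in /etc/dirsrv,
    so '/' and '/usr' must not be prepended -- doing so produces the
    nonexistent '/usr/etc/dirsrv' seen in the OSError above.
    (Hypothetical sketch, not lib389 behavior.)
    """
    candidates = []
    if prefix and prefix not in ('/', '/usr'):
        # Genuine relocated install, e.g. /opt/dirsrv/etc/dirsrv
        candidates.append(os.path.join(prefix, 'etc', 'dirsrv'))
    candidates.append('/etc/dirsrv')
    return candidates

def instance_config_dir(prefix=None):
    """Return the first candidate directory that actually exists on disk."""
    for path in config_dir_candidates(prefix):
        if os.path.isdir(path):
            return path
    raise OSError('no dirsrv config directory found')
```

The same candidate logic would apply to the log path built from `topology.standalone.prefix` in test_ticket1347760.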
---------------------------- Captured stderr setup -----------------------------
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}8CwmSw5cC9cNSfTE4dILAhrRU2CVrnAUPnumNxhRizwGHMk83wdZJG9W6TjgWxV0E+taSeLIRbssrAoWhPGAImebdYNn6Aai
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}AIs0j16jV540vzUgF1dHce45PjVrFPVT1FYhnNFBCJeQa59urY7h3wgzDCzRumtpgo4v20EP9vDJPZzKFNRr87tgQe6mUZsK
DEBUG:tickets.ticket47462_test:cn=meTo_$host:$port,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
INFO:lib389:Starting total init cn=meTo_$host:$port,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
INFO:tickets.ticket47462_test:Replication is working.
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket47462_test:Found agmt dn (cn=meTo_$host:$port,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config)
INFO:tickets.ticket47462_test:Successfully modified replication agreement
INFO:tickets.ticket47462_test:Replication test passed
INFO:lib389:List backend with suffix=o=empty
INFO:lib389:Creating a local backend
INFO:lib389:List backend cn=empty,cn=ldbm database,cn=plugins,cn=config
INFO:lib389:Found entry dn: cn=empty,cn=ldbm database,cn=plugins,cn=config
cn: empty
nsslapd-cachememsize: 10485760
nsslapd-cachesize: -1
nsslapd-directory: /var/lib/dirsrv/slapd-master_1/db/empty
nsslapd-dncachememsize: 10485760
nsslapd-readonly: off
nsslapd-require-index: off
nsslapd-suffix: o=empty
objectClass: top
objectClass: extensibleObject
objectClass: nsBackendInstance
_______________________________ test_ticket47536 _______________________________

topology = <tickets.ticket47536_test.TopologyReplication object at 0x7f6f5320fcd0>

    def test_ticket47536(topology):
        """
        Set up 2way MMR:
            master_1 ----- startTLS -----> master_2
            master_1 <-- TLS_clientAuth -- master_2
        Check CA cert, Server-Cert and Key are retrieved as PEM from cert db
        when the server is started.  First, the file names are not specified
        and the default names derived from the cert nicknames.  Next, the
        file names are specified in the encryption config entries.
        Each time add 5 entries to master 1 and 2 and check they are replicated.
        """
        log.info("Ticket 47536 - Allow usage of OpenLDAP libraries that don't use NSS for crypto")
        create_keys_certs(topology)
        config_tls_agreements(topology)
        add_entry(topology.master1, 'master1', 'uid=m1user', 0, 5)
        add_entry(topology.master2, 'master2', 'uid=m2user', 0, 5)
        time.sleep(1)
        log.info('##### Searching for entries on master1...')
        entries = topology.master1.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, '(uid=*)')
        assert 10 == len(entries)
        log.info('##### Searching for entries on master2...')
        entries = topology.master2.search_s(DEFAULT_SUFFIX, ldap.SCOPE_SUBTREE, '(uid=*)')
>       assert 10 == len(entries)
E       assert 10 == 5
E        +  where 5 = len([dn: uid=m2user0,dc=example,dc=com\ncn: master2 user0\nobjectClass: top\nobjectClass: person\nobjectClass: extensibleObjec...er2 user4\nobjectClass: top\nobjectClass: person\nobjectClass: extensibleObject\nsn: user4\nuid: uid=m2user4\nuid: m2user4\n\n])

tickets/ticket47536_test.py:494: AssertionError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
OK group dirsrv exists
OK user dirsrv exists
('Update succeeded: status ', '0 Total update succeeded')
---------------------------- Captured stderr setup -----------------------------
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}K5tzYTZnFrMEK/x52hThR4iWTzvVSiDHxIQvFhEmwIhq4YciL9UKq6yJb0Or15Vb1yuwdNP5uGlfiK56adL1wuNxnFX3w8lU
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}yV0SJdxZdeu4gT3YKPyKCMtGrBD7EbizR0JsgsRg6XYKVyQVVykD6aAkBre3sS0j20zFutsc7o7VGYBhD+m3OrKNNh7IOKCj
DEBUG:tickets.ticket47536_test:cn=meTo_localhost.localdomain:38942,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:tickets.ticket47536_test:cn=meTo_localhost.localdomain:38941,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
INFO:lib389:Starting total init cn=meTo_localhost.localdomain:38942,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
INFO:tickets.ticket47536_test:Replication is working.
----------------------------- Captured stdout call -----------------------------
Is this a CA certificate [y/N]?
Enter the path length constraint, enter to skip [<0 for unlimited path]: >
Is this a critical extension [y/N]?
pk12util: PKCS12 EXPORT SUCCESSFUL
pk12util: PKCS12 IMPORT SUCCESSFUL
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket47536_test:Ticket 47536 - Allow usage of OpenLDAP libraries that don't use NSS for crypto
INFO:tickets.ticket47536_test: ######################### Creating SSL Keys and Certs ######################
INFO:tickets.ticket47536_test:##### shutdown master1
INFO:tickets.ticket47536_test:##### Creating a password file
INFO:tickets.ticket47536_test:##### create the pin file
INFO:tickets.ticket47536_test:##### Creating a noise file
INFO:tickets.ticket47536_test:##### Create key3.db and cert8.db database (master1): ['certutil', '-N', '-d', '/etc/dirsrv/slapd-master_1', '-f', '/etc/dirsrv/slapd-master_1/pwdfile.txt']
INFO:tickets.ticket47536_test: OUT:
INFO:tickets.ticket47536_test: ERR:
INFO:tickets.ticket47536_test:##### Creating encryption key for CA (master1): ['certutil', '-G', '-d', '/etc/dirsrv/slapd-master_1', '-z', '/etc/dirsrv/slapd-master_1/noise.txt', '-f', '/etc/dirsrv/slapd-master_1/pwdfile.txt']
INFO:tickets.ticket47536_test: OUT:
INFO:tickets.ticket47536_test: ERR:
INFO:tickets.ticket47536_test:##### Creating self-signed CA certificate (master1) -- nickname CAcertificate
Generating key.  This may take a few moments...
INFO:tickets.ticket47536_test:##### Creating Server certificate -- nickname Server-Cert1: ['certutil', '-S', '-n', 'Server-Cert1', '-s', 'CN=localhost.localdomain,OU=389 Directory Server', '-c', 'CAcertificate', '-t', ',,', '-m', '1001', '-v', '120', '-d', '/etc/dirsrv/slapd-master_1', '-z', '/etc/dirsrv/slapd-master_1/noise.txt', '-f', '/etc/dirsrv/slapd-master_1/pwdfile.txt']
INFO:tickets.ticket47536_test: OUT:
INFO:tickets.ticket47536_test: ERR:
INFO:tickets.ticket47536_test:##### Creating Server certificate -- nickname Server-Cert2: ['certutil', '-S', '-n', 'Server-Cert2', '-s', 'CN=localhost.localdomain,OU=390 Directory Server', '-c', 'CAcertificate', '-t', ',,', '-m', '1002', '-v', '120', '-d', '/etc/dirsrv/slapd-master_1', '-z', '/etc/dirsrv/slapd-master_1/noise.txt', '-f', '/etc/dirsrv/slapd-master_1/pwdfile.txt']
INFO:tickets.ticket47536_test: OUT:
INFO:tickets.ticket47536_test: ERR:
INFO:tickets.ticket47536_test:##### start master1
INFO:tickets.ticket47536_test:##### enable SSL in master1 with all ciphers
INFO:tickets.ticket47536_test: ######################### Enabling SSL LDAPSPORT 41636 ######################
INFO:tickets.ticket47536_test:##### Check the cert db: ['certutil', '-L', '-d', '/etc/dirsrv/slapd-master_1']
INFO:tickets.ticket47536_test: OUT:
INFO:tickets.ticket47536_test:
INFO:tickets.ticket47536_test: Certificate Nickname                               Trust Attributes
INFO:tickets.ticket47536_test:                                                    SSL,S/MIME,JAR/XPI
INFO:tickets.ticket47536_test:
INFO:tickets.ticket47536_test: CAcertificate                                      CTu,u,u
INFO:tickets.ticket47536_test: Server-Cert2                                       u,u,u
INFO:tickets.ticket47536_test: Server-Cert1                                       u,u,u
INFO:tickets.ticket47536_test: ERR:
INFO:tickets.ticket47536_test:##### restart master1
INFO:tickets.ticket47536_test:##### Check PEM files of master1
(before setting nsslapd-extract-pemfiles INFO:ti
ckets.ticket47536_test: ######################### Check PEM files (CAcertificate, Server-Cert1, Server-Cert1-Key) not in /etc/dirsrv/slapd-master_1 ###################### INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_1/CAcertificate.pem is correctly not generated. INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_1/Server-Cert1.pem is correctly not generated. INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_1/Server-Cert1-Key.pem is correctly not generated. INFO:tickets.ticket47536_test:##### Set on to nsslapd-extract-pemfiles INFO:tickets.ticket47536_test:##### restart master1 INFO:tickets.ticket47536_test:##### Check PEM files of master1 (after setting nsslapd-extract-pemfiles INFO:tickets.ticket47536_test: ######################### Check PEM files (CAcertificate, Server-Cert1, Server-Cert1-Key) in /etc/dirsrv/slapd-master_1 ###################### INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_1/CAcertificate.pem is successfully generated. INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_1/Server-Cert1.pem is successfully generated. INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_1/Server-Cert1-Key.pem is successfully generated. INFO:tickets.ticket47536_test:##### Extract PK12 file for master2: pk12util -o /tmp/Server-Cert2.pk12 -n "Server-Cert2" -d /etc/dirsrv/slapd-master_1 -w /etc/dirsrv/slapd-master_1/pwdfile.txt -k /etc/dirsrv/slapd-master_1/pwdfile.txt INFO:tickets.ticket47536_test:##### Check PK12 files INFO:tickets.ticket47536_test:/tmp/Server-Cert2.pk12 is successfully extracted. 
INFO:tickets.ticket47536_test:##### stop master2
INFO:tickets.ticket47536_test:##### Initialize Cert DB for master2
INFO:tickets.ticket47536_test:##### Create key3.db and cert8.db database (master2): ['certutil', '-N', '-d', '/etc/dirsrv/slapd-master_2', '-f', '/etc/dirsrv/slapd-master_1/pwdfile.txt']
INFO:tickets.ticket47536_test: OUT:
INFO:tickets.ticket47536_test: ERR:
INFO:tickets.ticket47536_test:##### Import certs to master2
INFO:tickets.ticket47536_test:Importing CAcertificate
INFO:tickets.ticket47536_test:##### Importing Server-Cert2 to master2: pk12util -i /tmp/Server-Cert2.pk12 -n "Server-Cert2" -d /etc/dirsrv/slapd-master_2 -w /etc/dirsrv/slapd-master_1/pwdfile.txt -k /etc/dirsrv/slapd-master_1/pwdfile.txt
INFO:tickets.ticket47536_test:copy /etc/dirsrv/slapd-master_1/pin.txt to /etc/dirsrv/slapd-master_2/pin.txt
INFO:tickets.ticket47536_test:##### start master2
INFO:tickets.ticket47536_test:##### enable SSL in master2 with all ciphers
INFO:tickets.ticket47536_test: ######################### Enabling SSL LDAPSPORT 42636 ######################
INFO:tickets.ticket47536_test:##### restart master2
INFO:tickets.ticket47536_test:##### Check PEM files of master2 (before setting nsslapd-extract-pemfiles
INFO:tickets.ticket47536_test: ######################### Check PEM files (CAcertificate, Server-Cert2, Server-Cert2-Key) not in /etc/dirsrv/slapd-master_2 ######################
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_2/CAcertificate.pem is correctly not generated.
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_2/Server-Cert2.pem is correctly not generated.
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_2/Server-Cert2-Key.pem is correctly not generated.
INFO:tickets.ticket47536_test:##### Set on to nsslapd-extract-pemfiles
INFO:tickets.ticket47536_test:##### restart master2
INFO:tickets.ticket47536_test:##### Check PEM files of master2 (after setting nsslapd-extract-pemfiles
INFO:tickets.ticket47536_test: ######################### Check PEM files (CAcertificate, Server-Cert2, Server-Cert2-Key) in /etc/dirsrv/slapd-master_2 ######################
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_2/CAcertificate.pem is successfully generated.
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_2/Server-Cert2.pem is successfully generated.
INFO:tickets.ticket47536_test:/etc/dirsrv/slapd-master_2/Server-Cert2-Key.pem is successfully generated.
INFO:tickets.ticket47536_test:##### restart master1
INFO:tickets.ticket47536_test: ######################### Creating SSL Keys and Certs Done ######################
INFO:tickets.ticket47536_test:######################### Configure SSL/TLS agreements ######################
INFO:tickets.ticket47536_test:######################## master1 -- startTLS -> master2 #####################
INFO:tickets.ticket47536_test:##################### master1 <- tls_clientAuth -- master2 ##################
INFO:tickets.ticket47536_test:##### Update the agreement of master1
INFO:tickets.ticket47536_test:##### Add the cert to the repl manager on master1
INFO:tickets.ticket47536_test:##### master2 Server Cert in base64 format: MIICyjCCAbKgAwIBAgICA+owDQYJKoZIhvcNAQELBQAwETEPMA0GA1UEAxMGQ0FjZXJ0MB4XDTE2MTAyNjIyMzYyNVoXDTI2MTAyNjIyMzYyNVowPzEdMBsGA1UECxMUMzkwIERpcmVjdG9yeSBTZXJ2ZXIxHjAcBgNVBAMTFWxvY2FsaG9zdC5sb2NhbGRvbWFpbjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALC7qxyr+VzolngwPavncqGfwub2xscF3soJRI5DD9qGWUubKTzQpmXST0gjC8vpSJK/nY1w07DgeDYgpQX9u7zdEU+DAvSiT+6TQJjEbEtZieeWMe2EKpNkVWBP/uWepMnWJK+SIp4j58ZpthEfvU0xGRLxizCxLqYoAMH3/v9Lx9XbryrcAdyCkUn81n1KffA90LoD5nnElG4fM+urH+pHdsTSdJrekb8+XGlACDYKEdd2idAZEKeYGuU0jc9CpEaps+cTHHg593kRan+I6+BzrpMEu9Q3vlrVCITvNBbOGMrCkxbbr9QrcKYpFSmac7Pu/b95b/Gg/DdClPMmd/cCAwEAATANBgkqhkiG9w0BAQsFAAOCAQEAjCE+zgRBx+EJQIlwCGRqj1fTDztniK8anYTAurlrWqrbFaXTVx+2Es021CiYIgm8+Yca+8bCpiRbixtdQo6sPdKCtDVoJQq9FfzicU4KEk99djvZKvjD0HUyOhlc6VNLEAm6aKqKPOwpaJdvQ0Gfc3MYr1cE8MqWsV/pRikGtFiP7OPHTC/ObzwCkaOMq1yKwQAJu+MBYXVue8C+nbIl6IRq3mF3LVG7t98wRUNYRoAjm9lEK6YAE5OTeQZ6XGp1QkTN2stmXLOWlXLQczye47RZB+0J4VizaJWY9Gk9Lz1XPTwa0/SSBucsSs+NM5Vq8x6GI5XsxvHm6clRWNl96w==
INFO:tickets.ticket47536_test:##### Replication manager on master1: cn=replrepl,cn=config
INFO:tickets.ticket47536_test: ObjectClass:
INFO:tickets.ticket47536_test:     : top
INFO:tickets.ticket47536_test:     : person
INFO:tickets.ticket47536_test:##### Modify the certmap.conf on master1
INFO:tickets.ticket47536_test:##### Update the agreement of master2
INFO:tickets.ticket47536_test: ######################### Configure SSL/TLS agreements Done ######################
INFO:tickets.ticket47536_test: ######################### Adding 5 entries to master1 ######################
INFO:tickets.ticket47536_test: ######################### Adding 5 entries to master2 ######################
INFO:tickets.ticket47536_test:##### Searching for entries on master1...
INFO:tickets.ticket47536_test:##### Searching for entries on master2...
____________________________ test_ticket47619_init _____________________________

topology = Master[localhost.localdomain:38941] -> Consumer[localhost.localdomain:38961

    def test_ticket47619_init(topology):
        """
            Initialize the test environment
        """
        topology.master.plugins.enable(name=PLUGIN_RETRO_CHANGELOG)
        #topology.master.plugins.enable(name=PLUGIN_MEMBER_OF)
        #topology.master.plugins.enable(name=PLUGIN_REFER_INTEGRITY)
        topology.master.stop(timeout=10)
        topology.master.start(timeout=10)
        topology.master.log.info("test_ticket47619_init topology %r" % (topology))
        # the test case will check if a warning message is logged in the
        # error log of the supplier
>       topology.master.errorlog_file = open(topology.master.errlog, "r")
E       IOError: [Errno 2] No such file or directory: '/var/log/dirsrv/slapd-master_1/error'

tickets/ticket47619_test.py:141: IOError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
OK group dirsrv exists
OK user dirsrv exists
('Update succeeded: status ', '0 Total update succeeded')
---------------------------- Captured stderr setup -----------------------------
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}dsALidbbI6PEnNGByoEOxdgIeGxVDT8T1P4mM9gqhtxHjMNN9GSnlIiI4BuoaMmg68VOmL++tH767EiSQv4btcawIYKnGOyd

INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}uXnyTgdGovymqsYfwFsUO8rrsEO4SbeRuKlM5tt/DzPtqkj5n5+dI07YfCUXdcWSCAIhid8RqHaWX6eLQoO5IXFyOiRhXqlg

DEBUG:tickets.ticket47619_test:cn=meTo_$host:$port,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
INFO:lib389:Starting total init cn=meTo_$host:$port,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
INFO:tickets.ticket47619_test:Replication is working.
----------------------------- Captured stderr call -----------------------------
INFO:lib389:test_ticket47619_init topology Master[localhost.localdomain:38941] -> Consumer[localhost.localdomain:38961
_____________________________ test_ticket47653_add _____________________________

topology = <tickets.ticket47653MMR_test.TopologyMaster1Master2 object at 0x7f6f543aa610>

    def test_ticket47653_add(topology):
        '''
            This test ADD an entry on MASTER1 where 47653 is fixed.
            Then it checks that entry is replicated on MASTER2 (even if on MASTER2 47653 is NOT fixed).
            Then update on MASTER2 and check the update on MASTER1

            It checks that, bound as bind_entry,
                - we can not ADD an entry without the proper SELFDN aci.
                - with the proper ACI we can not ADD with 'member' attribute
                - with the proper ACI and 'member' it succeeds to ADD
        '''
        topology.master1.log.info("\n\n######################### ADD ######################\n")

        # bind as bind_entry
        topology.master1.log.info("Bind as %s" % BIND_DN)
        topology.master1.simple_bind_s(BIND_DN, BIND_PW)

        # Prepare the entry with multivalued members
        entry_with_members = Entry(ENTRY_DN)
        entry_with_members.setValues('objectclass', 'top', 'person', 'OCticket47653')
        entry_with_members.setValues('sn', ENTRY_NAME)
        entry_with_members.setValues('cn', ENTRY_NAME)
        entry_with_members.setValues('postalAddress', 'here')
        entry_with_members.setValues('postalCode', '1234')
        members = []
        for cpt in range(MAX_OTHERS):
            name = "%s%d" % (OTHER_NAME, cpt)
            members.append("cn=%s,%s" % (name, SUFFIX))
        members.append(BIND_DN)
        entry_with_members.setValues('member', members)

        # Prepare the entry with only one member value
        entry_with_member = Entry(ENTRY_DN)
        entry_with_member.setValues('objectclass', 'top', 'person', 'OCticket47653')
        entry_with_member.setValues('sn', ENTRY_NAME)
        entry_with_member.setValues('cn', ENTRY_NAME)
        entry_with_member.setValues('postalAddress', 'here')
        entry_with_member.setValues('postalCode', '1234')
        member = []
        member.append(BIND_DN)
        entry_with_member.setValues('member', member)

        # entry to add WITH member being BIND_DN but WITHOUT the ACI -> ldap.INSUFFICIENT_ACCESS
        try:
            topology.master1.log.info("Try to add Add %s (aci is missing): %r" % (ENTRY_DN, entry_with_member))
            topology.master1.add_s(entry_with_member)
        except Exception as e:
            topology.master1.log.info("Exception (expected): %s" % type(e).__name__)
            assert isinstance(e, ldap.INSUFFICIENT_ACCESS)

        # Ok Now add the proper ACI
        topology.master1.log.info("Bind as %s and add the ADD SELFDN aci" % DN_DM)
        topology.master1.simple_bind_s(DN_DM, PASSWORD)

        ACI_TARGET = "(target = \"ldap:///cn=*,%s\";)" % SUFFIX
        ACI_TARGETFILTER = "(targetfilter =\"(objectClass=%s)\")" % OC_NAME
        ACI_ALLOW = "(version 3.0; acl \"SelfDN add\"; allow (add)"
        ACI_SUBJECT = " userattr = \"member#selfDN\";)"
        ACI_BODY = ACI_TARGET + ACI_TARGETFILTER + ACI_ALLOW + ACI_SUBJECT
        mod = [(ldap.MOD_ADD, 'aci', ACI_BODY)]
        topology.master1.modify_s(SUFFIX, mod)
        time.sleep(1)

        # bind as bind_entry
        topology.master1.log.info("Bind as %s" % BIND_DN)
        topology.master1.simple_bind_s(BIND_DN, BIND_PW)

        # entry to add WITHOUT member and WITH the ACI -> ldap.INSUFFICIENT_ACCESS
        try:
            topology.master1.log.info("Try to add Add %s (member is missing)" % ENTRY_DN)
            topology.master1.add_s(Entry((ENTRY_DN, {
                'objectclass': ENTRY_OC.split(),
                'sn': ENTRY_NAME,
                'cn': ENTRY_NAME,
                'postalAddress': 'here',
                'postalCode': '1234'})))
        except Exception as e:
            topology.master1.log.info("Exception (expected): %s" % type(e).__name__)
            assert isinstance(e, ldap.INSUFFICIENT_ACCESS)

        # entry to add WITH memberS and WITH the ACI -> ldap.INSUFFICIENT_ACCESS
        # member should contain only one value
        try:
            topology.master1.log.info("Try to add Add %s (with several member values)" % ENTRY_DN)
            topology.master1.add_s(entry_with_members)
        except Exception as e:
            topology.master1.log.info("Exception (expected): %s" % type(e).__name__)
            assert isinstance(e, ldap.INSUFFICIENT_ACCESS)

        topology.master1.log.info("Try to add Add %s should be successful" % ENTRY_DN)
        try:
            topology.master1.add_s(entry_with_member)
        except ldap.LDAPError as e:
            topology.master1.log.info("Failed to add entry, error: " + e.message['desc'])
>           assert False
E           assert False

tickets/ticket47653MMR_test.py:305: AssertionError
----------------------------- Captured stderr call -----------------------------
INFO:lib389:

######################### ADD ######################

INFO:lib389:Bind as cn=bind_entry, dc=example,dc=com
INFO:lib389:Try to add Add cn=test_entry, dc=example,dc=com (aci is missing): dn: cn=test_entry, dc=example,dc=com
cn: test_entry
member: cn=bind_entry, dc=example,dc=com
objectclass: top
objectclass: person
objectclass: OCticket47653
postalAddress: here
postalCode: 1234
sn: test_entry

INFO:lib389:Exception (expected): INSUFFICIENT_ACCESS
INFO:lib389:Bind as cn=Directory Manager and add the ADD SELFDN aci
INFO:lib389:Bind as cn=bind_entry, dc=example,dc=com
INFO:lib389:Try to add Add cn=test_entry, dc=example,dc=com (member is missing)
INFO:lib389:Exception (expected): INSUFFICIENT_ACCESS
INFO:lib389:Try to add Add cn=test_entry, dc=example,dc=com (with several member values)
INFO:lib389:Exception (expected): INSUFFICIENT_ACCESS
INFO:lib389:Try to add Add cn=test_entry, dc=example,dc=com should be successful
INFO:lib389:Failed to add entry, error: Insufficient access
___________________________ test_ticket47653_modify ____________________________

topology = <tickets.ticket47653MMR_test.TopologyMaster1Master2 object at 0x7f6f543aa610>

    def test_ticket47653_modify(topology):
        '''
            This test MOD an entry on MASTER1 where 47653 is fixed.
            Then it checks that update is replicated on MASTER2 (even if on MASTER2 47653 is NOT fixed).
            Then update on MASTER2 (bound as BIND_DN).
            This update may fail whether or not 47653 is fixed on MASTER2

            It checks that, bound as bind_entry,
                - we can not modify an entry without the proper SELFDN aci.
                - adding the ACI, we can modify the entry
        '''
        # bind as bind_entry
        topology.master1.log.info("Bind as %s" % BIND_DN)
        topology.master1.simple_bind_s(BIND_DN, BIND_PW)

        topology.master1.log.info("\n\n######################### MODIFY ######################\n")

        # entry to modify WITH member being BIND_DN but WITHOUT the ACI -> ldap.INSUFFICIENT_ACCESS
        try:
            topology.master1.log.info("Try to modify %s (aci is missing)" % ENTRY_DN)
            mod = [(ldap.MOD_REPLACE, 'postalCode', '9876')]
            topology.master1.modify_s(ENTRY_DN, mod)
        except Exception as e:
            topology.master1.log.info("Exception (expected): %s" % type(e).__name__)
            assert isinstance(e, ldap.INSUFFICIENT_ACCESS)

        # Ok Now add the proper ACI
        topology.master1.log.info("Bind as %s and add the WRITE SELFDN aci" % DN_DM)
        topology.master1.simple_bind_s(DN_DM, PASSWORD)

        ACI_TARGET = "(target = \"ldap:///cn=*,%s\";)" % SUFFIX
        ACI_TARGETATTR = "(targetattr = *)"
        ACI_TARGETFILTER = "(targetfilter =\"(objectClass=%s)\")" % OC_NAME
        ACI_ALLOW = "(version 3.0; acl \"SelfDN write\"; allow (write)"
        ACI_SUBJECT = " userattr = \"member#selfDN\";)"
        ACI_BODY = ACI_TARGET + ACI_TARGETATTR + ACI_TARGETFILTER + ACI_ALLOW + ACI_SUBJECT
        mod = [(ldap.MOD_ADD, 'aci', ACI_BODY)]
        topology.master1.modify_s(SUFFIX, mod)
        time.sleep(1)

        # bind as bind_entry
        topology.master1.log.info("M1: Bind as %s" % BIND_DN)
        topology.master1.simple_bind_s(BIND_DN, BIND_PW)

        # modify the entry and checks the value
        topology.master1.log.info("M1: Try to modify %s. It should succeeds" % ENTRY_DN)
        mod = [(ldap.MOD_REPLACE, 'postalCode', '1928')]
>       topology.master1.modify_s(ENTRY_DN, mod)

tickets/ticket47653MMR_test.py:387:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:402: in modify_s
    return self.result(msgid,all=1,timeout=self.timeout)
../../../lib389/lib389/__init__.py:127: in inner
    objtype, data = f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:503: in result
    resp_type, resp_data, resp_msgid = self.result2(msgid,all,timeout)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:507: in result2
    resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all,timeout)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:514: in result3
    resp_ctrl_classes=resp_ctrl_classes
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:521: in result4
    ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f537f90e0>
func = <built-in method result4 of LDAP object at 0x7f6f54393198>
args = (37, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None
e = INSUFFICIENT_ACCESS({'desc': 'Insufficient access'},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs
        and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               INSUFFICIENT_ACCESS: {'desc': 'Insufficient access'}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: INSUFFICIENT_ACCESS
----------------------------- Captured stderr call -----------------------------
INFO:lib389:Bind as cn=bind_entry, dc=example,dc=com
INFO:lib389:

######################### MODIFY ######################

INFO:lib389:Try to modify cn=test_entry, dc=example,dc=com (aci is missing)
INFO:lib389:Exception (expected): INSUFFICIENT_ACCESS
INFO:lib389:Bind as cn=Directory Manager and add the WRITE SELFDN aci
INFO:lib389:M1: Bind as cn=bind_entry, dc=example,dc=com
INFO:lib389:M1: Try to modify cn=test_entry, dc=example,dc=com. It should succeeds
____________________________ test_ticket47669_init _____________________________

topology = <tickets.ticket47669_test.TopologyStandalone object at 0x7f6f538de490>

    def test_ticket47669_init(topology):
        """
        Add cn=changelog5,cn=config
        Enable cn=Retro Changelog Plugin,cn=plugins,cn=config
        """
        log.info('Testing Ticket 47669 - Test duration syntax in the changelogs')

        # bind as directory manager
        topology.standalone.log.info("Bind as %s" % DN_DM)
        topology.standalone.simple_bind_s(DN_DM, PASSWORD)

        try:
            changelogdir = "%s/changelog" % topology.standalone.dbdir
            topology.standalone.add_s(Entry((CHANGELOG,
                                             {'objectclass': 'top extensibleObject'.split(),
                                              'nsslapd-changelogdir': changelogdir})))
        except ldap.LDAPError as e:
            log.error('Failed to add ' + CHANGELOG + ': error ' + e.message['desc'])
            assert False

        try:
            topology.standalone.modify_s(RETROCHANGELOG, [(ldap.MOD_REPLACE, 'nsslapd-pluginEnabled', 'on')])
        except ldap.LDAPError as e:
            log.error('Failed to enable ' + RETROCHANGELOG + ': error ' + e.message['desc'])
            assert False

        # restart the server
>       topology.standalone.restart(timeout=10)

tickets/ticket47669_test.py:103:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../lib389/lib389/__init__.py:1215: in restart
    self.start(timeout)
../../../lib389/lib389/__init__.py:1096: in start
    "dirsrv@%s" % self.serverid])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

popenargs = (['/usr/bin/systemctl', 'start', 'dirsrv@standalone'],), kwargs = {}
retcode = 1, cmd = ['/usr/bin/systemctl', 'start', 'dirsrv@standalone']

    def check_call(*popenargs, **kwargs):
        """Run command with arguments.  Wait for command to complete.  If
        the exit code was zero then return, otherwise raise
        CalledProcessError.  The CalledProcessError object will have the
        return code in the returncode attribute.

        The arguments are the same as for the Popen constructor.  Example:

        check_call(["ls", "-l"])
        """
        retcode = call(*popenargs, **kwargs)
        if retcode:
            cmd = kwargs.get("args")
            if cmd is None:
                cmd = popenargs[0]
>           raise CalledProcessError(retcode, cmd)
E           CalledProcessError: Command '['/usr/bin/systemctl', 'start', 'dirsrv@standalone']' returned non-zero exit status 1

/usr/lib64/python2.7/subprocess.py:541: CalledProcessError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket47669_test:Testing Ticket 47669 - Test duration syntax in the changelogs
INFO:lib389:Bind as cn=Directory Manager
Job for dirsrv@standalone.service failed because the control process exited with error code. See "systemctl status dirsrv@standalone.service" and "journalctl -xe" for details.
______________________ test_ticket47669_changelog_maxage _______________________

topology = <tickets.ticket47669_test.TopologyStandalone object at 0x7f6f538de490>

    def test_ticket47669_changelog_maxage(topology):
        """
        Test nsslapd-changelogmaxage in cn=changelog5,cn=config
        """
        log.info('1. Test nsslapd-changelogmaxage in cn=changelog5,cn=config')

        # bind as directory manager
        topology.standalone.log.info("Bind as %s" % DN_DM)
>       topology.standalone.simple_bind_s(DN_DM, PASSWORD)

tickets/ticket47669_test.py:159:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:223: in simple_bind_s
    resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all=1,timeout=self.timeout)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:514: in result3
    resp_ctrl_classes=resp_ctrl_classes
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:521: in result4
    ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f537a7fc8>
func = <built-in method result4 of LDAP object at 0x7f6f54487f58>
args = (13, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs
        and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket47669_test:1. Test nsslapd-changelogmaxage in cn=changelog5,cn=config
INFO:lib389:Bind as cn=Directory Manager
___________________ test_ticket47669_changelog_triminterval ____________________

topology = <tickets.ticket47669_test.TopologyStandalone object at 0x7f6f538de490>

    def test_ticket47669_changelog_triminterval(topology):
        """
        Test nsslapd-changelogtrim-interval in cn=changelog5,cn=config
        """
        log.info('2. Test nsslapd-changelogtrim-interval in cn=changelog5,cn=config')

        # bind as directory manager
        topology.standalone.log.info("Bind as %s" % DN_DM)
>       topology.standalone.simple_bind_s(DN_DM, PASSWORD)

tickets/ticket47669_test.py:179:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:222: in simple_bind_s
    msgid = self.simple_bind(who,cred,serverctrls,clientctrls)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:216: in simple_bind
    return self._ldap_call(self._l.simple_bind,who,cred,RequestControlTuples(serverctrls),RequestControlTuples(clientctrls))
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f537a7fc8>
func = <built-in method simple_bind of LDAP object at 0x7f6f54487f58>
args = ('cn=Directory Manager', 'password', None, None), kwargs = {}
diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs
        and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket47669_test:2. Test nsslapd-changelogtrim-interval in cn=changelog5,cn=config
INFO:lib389:Bind as cn=Directory Manager
_________________ test_ticket47669_changelog_compactdbinterval _________________

topology = <tickets.ticket47669_test.TopologyStandalone object at 0x7f6f538de490>

    def test_ticket47669_changelog_compactdbinterval(topology):
        """
        Test nsslapd-changelogcompactdb-interval in cn=changelog5,cn=config
        """
        log.info('3. Test nsslapd-changelogcompactdb-interval in cn=changelog5,cn=config')

        # bind as directory manager
        topology.standalone.log.info("Bind as %s" % DN_DM)
>       topology.standalone.simple_bind_s(DN_DM, PASSWORD)

tickets/ticket47669_test.py:199:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:222: in simple_bind_s
    msgid = self.simple_bind(who,cred,serverctrls,clientctrls)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:216: in simple_bind
    return self._ldap_call(self._l.simple_bind,who,cred,RequestControlTuples(serverctrls),RequestControlTuples(clientctrls))
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f537a7fc8>
func = <built-in method simple_bind of LDAP object at 0x7f6f54487f58>
args = ('cn=Directory Manager', 'password', None, None), kwargs = {}
diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs
        and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket47669_test:3. Test nsslapd-changelogcompactdb-interval in cn=changelog5,cn=config
INFO:lib389:Bind as cn=Directory Manager
____________________ test_ticket47669_retrochangelog_maxage ____________________

topology = <tickets.ticket47669_test.TopologyStandalone object at 0x7f6f538de490>

    def test_ticket47669_retrochangelog_maxage(topology):
        """
        Test nsslapd-changelogmaxage in cn=Retro Changelog Plugin,cn=plugins,cn=config
        """
        log.info('4. Test nsslapd-changelogmaxage in cn=Retro Changelog Plugin,cn=plugins,cn=config')

        # bind as directory manager
        topology.standalone.log.info("Bind as %s" % DN_DM)
>       topology.standalone.simple_bind_s(DN_DM, PASSWORD)

tickets/ticket47669_test.py:219:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:222: in simple_bind_s
    msgid = self.simple_bind(who,cred,serverctrls,clientctrls)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:216: in simple_bind
    return self._ldap_call(self._l.simple_bind,who,cred,RequestControlTuples(serverctrls),RequestControlTuples(clientctrls))
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f537a7fc8>
func = <built-in method simple_bind of LDAP object at 0x7f6f54487f58>
args = ('cn=Directory Manager', 'password', None, None), kwargs = {}
diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs
        and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self),
                    self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket47669_test:4. Test nsslapd-changelogmaxage in cn=Retro Changelog Plugin,cn=plugins,cn=config
INFO:lib389:Bind as cn=Directory Manager
____________________________ test_ticket47823_init _____________________________

topology = <tickets.ticket47823_test.TopologyStandalone object at 0x7f6f54202d90>

    def test_ticket47823_init(topology):
        """

        """

        # Enabled the plugins
        topology.standalone.plugins.enable(name=PLUGIN_ATTR_UNIQUENESS)
        topology.standalone.restart(timeout=120)

        topology.standalone.add_s(Entry((PROVISIONING_DN, {'objectclass': "top nscontainer".split(),
                                                           'cn': PROVISIONING_CN})))
        topology.standalone.add_s(Entry((ACTIVE_DN, {'objectclass': "top nscontainer".split(),
                                                     'cn': ACTIVE_CN})))
        topology.standalone.add_s(Entry((STAGE_DN, {'objectclass': "top nscontainer".split(),
                                                    'cn': STAGE_CN})))
        topology.standalone.add_s(Entry((DELETE_DN, {'objectclass': "top nscontainer".split(),
                                                     'cn': DELETE_CN})))
>       topology.standalone.errorlog_file = open(topology.standalone.errlog, "r")
E       IOError: [Errno 2] No such file or directory: '/var/log/dirsrv/slapd-standalone/error'

tickets/ticket47823_test.py:477: IOError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
______________________ test_ticket47823_invalid_config_1 _______________________

topology = <tickets.ticket47823_test.TopologyStandalone object at 0x7f6f54202d90>

    def 
test_ticket47823_invalid_config_1(topology): ''' Chec
k that an invalid config is detected. No uniqueness enforced Using old config: arg0 is missing ''' _header(topology, "Invalid config (old): arg0 is missing") _config_file(topology, action='save') # create an invalid config without arg0 config = _build_config(topology, attr_name='cn', subtree_1=ACTIVE_DN, subtree_2=None, type_config='old', across_subtrees=False) del config.data['nsslapd-pluginarg0'] # replace 'cn' uniqueness entry try: topology.standalone.delete_s(config.dn) except ldap.NO_SUCH_OBJECT: pass topology.standalone.add_s(config) topology.standalone.getEntry(config.dn, ldap.SCOPE_BASE, "(objectclass=nsSlapdPlugin)", ALL_CONFIG_ATTRS) # Check the server did not restart topology.standalone.modify_s(DN_CONFIG, [(ldap.MOD_REPLACE, 'nsslapd-errorlog-level', '65536')]) try: > topology.standalone.restart(timeout=5) tickets/ticket47823_test.py:636: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../../../lib389/lib389/__init__.py:1215: in restart self.start(timeout) ../../../lib389/lib389/__init__.py:1096: in start "dirsrv@%s" % self.serverid]) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ popenargs = (['/usr/bin/systemctl', 'start', 'dirsrv@standalone'],), kwargs = {} retcode = 1, cmd = ['/usr/bin/systemctl', 'start', 'dirsrv@standalone'] def check_call(*popenargs, **kwargs): """Run command with arguments. Wait for command to complete. If the exit code was zero then return, otherwise raise CalledProcessError. The CalledProcessError object will have the return code in the returncode attribute. The arguments are the same as for the Popen constructor. 
        Example:
        check_call(["ls", "-l"])
        """
        retcode = call(*popenargs, **kwargs)
        if retcode:
            cmd = kwargs.get("args")
            if cmd is None:
                cmd = popenargs[0]
>           raise CalledProcessError(retcode, cmd)
E           CalledProcessError: Command '['/usr/bin/systemctl', 'start', 'dirsrv@standalone']' returned non-zero exit status 1

/usr/lib64/python2.7/subprocess.py:541: CalledProcessError
----------------------------- Captured stderr call -----------------------------
INFO:lib389: ###############################################
INFO:lib389:#######
INFO:lib389:####### Invalid config (old): arg0 is missing
INFO:lib389:#######
INFO:lib389:###############################################
Job for dirsrv@standalone.service failed because the control process exited with error code. See "systemctl status dirsrv@standalone.service" and "journalctl -xe" for details.
______________________ test_ticket47823_invalid_config_2 _______________________

topology = <tickets.ticket47823_test.TopologyStandalone object at 0x7f6f54202d90>

    def test_ticket47823_invalid_config_2(topology):
        '''
        Check that an invalid config is detected. No uniqueness enforced
        Using old config: arg1 is missing
        '''
        _header(topology, "Invalid config (old): arg1 is missing")
        _config_file(topology, action='save')
        # create an invalid config without arg0
>       config = _build_config(topology, attr_name='cn', subtree_1=ACTIVE_DN, subtree_2=None, type_config='old', across_subtrees=False)

tickets/ticket47823_test.py:672:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tickets/ticket47823_test.py:124: in _build_config
    config = _uniqueness_config_entry(topology, attr_name)
tickets/ticket47823_test.py:112: in _uniqueness_config_entry
    'nsslapd-pluginDescription'])
../../../lib389/lib389/__init__.py:1574: in getEntry
    restype, obj = self.result(res)
../../../lib389/lib389/__init__.py:127: in inner
    objtype, data = f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:503: in result
    resp_type, resp_data, resp_msgid = self.result2(msgid,all,timeout)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:507: in result2
    resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all,timeout)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:514: in result3
    resp_ctrl_classes=resp_ctrl_classes
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:521: in result4
    ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f541ec830>
func = <built-in method result4 of LDAP object at 0x7f6f54487b20>
args = (15, 1, -1, 0, 0, 0), kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self), self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:lib389: ###############################################
INFO:lib389:#######
INFO:lib389:####### Invalid config (old): arg1 is missing
INFO:lib389:#######
INFO:lib389:###############################################
______________________ test_ticket47823_invalid_config_3 _______________________

topology = <tickets.ticket47823_test.TopologyStandalone object at 0x7f6f54202d90>

    def test_ticket47823_invalid_config_3(topology):
        '''
        Check that an invalid config is detected.
        No uniqueness enforced
        Using old config: arg0 is missing
        '''
        _header(topology, "Invalid config (old): arg0 is missing but new config attrname exists")
        _config_file(topology, action='save')
        # create an invalid config without arg0
>       config = _build_config(topology, attr_name='cn', subtree_1=ACTIVE_DN, subtree_2=None, type_config='old', across_subtrees=False)

tickets/ticket47823_test.py:723:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tickets/ticket47823_test.py:124: in _build_config
    config = _uniqueness_config_entry(topology, attr_name)
tickets/ticket47823_test.py:112: in _uniqueness_config_entry
    'nsslapd-pluginDescription'])
../../../lib389/lib389/__init__.py:1573: in getEntry
    res = self.search(*args, **kwargs)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:594: in search
    return self.search_ext(base,scope,filterstr,attrlist,attrsonly,None,None)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:586: in search_ext
    timeout,sizelimit,
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f541ec830>
func = <built-in method search_ext of LDAP object at 0x7f6f54487b20>
args = ('cn=attribute uniqueness,cn=plugins,cn=config', 0, '(objectclass=nsSlapdPlugin)', ['objectClass', 'cn', 'nsslapd-pluginPath', 'nsslapd-pluginInitfunc', 'nsslapd-pluginType', 'nsslapd-pluginEnabled', ...], 0, None, ...)
kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self), self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:lib389: ###############################################
INFO:lib389:#######
INFO:lib389:####### Invalid config (old): arg0 is missing but new config attrname exists
INFO:lib389:#######
INFO:lib389:###############################################
______________________ test_ticket47823_invalid_config_4 _______________________

topology = <tickets.ticket47823_test.TopologyStandalone object at 0x7f6f54202d90>

    def test_ticket47823_invalid_config_4(topology):
        '''
        Check that an invalid config is detected. No uniqueness enforced
        Using old config: arg1 is missing
        '''
        _header(topology, "Invalid config (old): arg1 is missing but new config exist")
        _config_file(topology, action='save')
        # create an invalid config without arg0
>       config = _build_config(topology, attr_name='cn', subtree_1=ACTIVE_DN, subtree_2=None, type_config='old', across_subtrees=False)

tickets/ticket47823_test.py:776:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tickets/ticket47823_test.py:124: in _build_config
    config = _uniqueness_config_entry(topology, attr_name)
tickets/ticket47823_test.py:112: in _uniqueness_config_entry
    'nsslapd-pluginDescription'])
../../../lib389/lib389/__init__.py:1573: in getEntry
    res = self.search(*args, **kwargs)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:594: in search
    return self.search_ext(base,scope,filterstr,attrlist,attrsonly,None,None)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:586: in search_ext
    timeout,sizelimit,
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f541ec830>
func = <built-in method search_ext of LDAP object at 0x7f6f54487b20>
args = ('cn=attribute uniqueness,cn=plugins,cn=config', 0, '(objectclass=nsSlapdPlugin)', ['objectClass', 'cn', 'nsslapd-pluginPath', 'nsslapd-pluginInitfunc', 'nsslapd-pluginType', 'nsslapd-pluginEnabled', ...], 0, None, ...)
kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self), self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:lib389: ###############################################
INFO:lib389:#######
INFO:lib389:####### Invalid config (old): arg1 is missing but new config exist
INFO:lib389:#######
INFO:lib389:###############################################
______________________ test_ticket47823_invalid_config_5 _______________________

topology = <tickets.ticket47823_test.TopologyStandalone object at 0x7f6f54202d90>

    def test_ticket47823_invalid_config_5(topology):
        '''
        Check that an invalid config is detected.
        No uniqueness enforced
        Using new config: uniqueness-attribute-name is missing
        '''
        _header(topology, "Invalid config (new): uniqueness-attribute-name is missing")
        _config_file(topology, action='save')
        # create an invalid config without arg0
>       config = _build_config(topology, attr_name='cn', subtree_1=ACTIVE_DN, subtree_2=None, type_config='new', across_subtrees=False)

tickets/ticket47823_test.py:828:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tickets/ticket47823_test.py:131: in _build_config
    config = _uniqueness_config_entry(topology, attr_name)
tickets/ticket47823_test.py:112: in _uniqueness_config_entry
    'nsslapd-pluginDescription'])
../../../lib389/lib389/__init__.py:1573: in getEntry
    res = self.search(*args, **kwargs)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:594: in search
    return self.search_ext(base,scope,filterstr,attrlist,attrsonly,None,None)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:586: in search_ext
    timeout,sizelimit,
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f541ec830>
func = <built-in method search_ext of LDAP object at 0x7f6f54487b20>
args = ('cn=attribute uniqueness,cn=plugins,cn=config', 0, '(objectclass=nsSlapdPlugin)', ['objectClass', 'cn', 'nsslapd-pluginPath', 'nsslapd-pluginInitfunc', 'nsslapd-pluginType', 'nsslapd-pluginEnabled', ...], 0, None, ...)
kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self), self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:lib389: ###############################################
INFO:lib389:#######
INFO:lib389:####### Invalid config (new): uniqueness-attribute-name is missing
INFO:lib389:#######
INFO:lib389:###############################################
______________________ test_ticket47823_invalid_config_6 _______________________

topology = <tickets.ticket47823_test.TopologyStandalone object at 0x7f6f54202d90>

    def test_ticket47823_invalid_config_6(topology):
        '''
        Check that an invalid config is detected. No uniqueness enforced
        Using new config: uniqueness-subtrees is missing
        '''
        _header(topology, "Invalid config (new): uniqueness-subtrees is missing")
        _config_file(topology, action='save')
        # create an invalid config without arg0
>       config = _build_config(topology, attr_name='cn', subtree_1=ACTIVE_DN, subtree_2=None, type_config='new', across_subtrees=False)

tickets/ticket47823_test.py:879:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tickets/ticket47823_test.py:131: in _build_config
    config = _uniqueness_config_entry(topology, attr_name)
tickets/ticket47823_test.py:112: in _uniqueness_config_entry
    'nsslapd-pluginDescription'])
../../../lib389/lib389/__init__.py:1573: in getEntry
    res = self.search(*args, **kwargs)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:594: in search
    return self.search_ext(base,scope,filterstr,attrlist,attrsonly,None,None)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:586: in search_ext
    timeout,sizelimit,
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f541ec830>
func = <built-in method search_ext of LDAP object at 0x7f6f54487b20>
args = ('cn=attribute uniqueness,cn=plugins,cn=config', 0, '(objectclass=nsSlapdPlugin)', ['objectClass', 'cn', 'nsslapd-pluginPath', 'nsslapd-pluginInitfunc', 'nsslapd-pluginType', 'nsslapd-pluginEnabled', ...], 0, None, ...)
kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self), self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:lib389: ###############################################
INFO:lib389:#######
INFO:lib389:####### Invalid config (new): uniqueness-subtrees is missing
INFO:lib389:#######
INFO:lib389:###############################################
______________________ test_ticket47823_invalid_config_7 _______________________

topology = <tickets.ticket47823_test.TopologyStandalone object at 0x7f6f54202d90>

    def test_ticket47823_invalid_config_7(topology):
        '''
        Check that an invalid config is detected.
        No uniqueness enforced
        Using new config: uniqueness-subtrees is missing
        '''
        _header(topology, "Invalid config (new): uniqueness-subtrees are invalid")
        _config_file(topology, action='save')
        # create an invalid config without arg0
>       config = _build_config(topology, attr_name='cn', subtree_1="this_is dummy DN", subtree_2="an other=dummy DN", type_config='new', across_subtrees=False)

tickets/ticket47823_test.py:930:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tickets/ticket47823_test.py:131: in _build_config
    config = _uniqueness_config_entry(topology, attr_name)
tickets/ticket47823_test.py:112: in _uniqueness_config_entry
    'nsslapd-pluginDescription'])
../../../lib389/lib389/__init__.py:1573: in getEntry
    res = self.search(*args, **kwargs)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:594: in search
    return self.search_ext(base,scope,filterstr,attrlist,attrsonly,None,None)
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:586: in search_ext
    timeout,sizelimit,
../../../lib389/lib389/__init__.py:159: in inner
    return f(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <lib389.DirSrv instance at 0x7f6f541ec830>
func = <built-in method search_ext of LDAP object at 0x7f6f54487b20>
args = ('cn=attribute uniqueness,cn=plugins,cn=config', 0, '(objectclass=nsSlapdPlugin)', ['objectClass', 'cn', 'nsslapd-pluginPath', 'nsslapd-pluginInitfunc', 'nsslapd-pluginType', 'nsslapd-pluginEnabled', ...], 0, None, ...)
kwargs = {}, diagnostic_message_success = None
e = SERVER_DOWN({'desc': "Can't contact LDAP server"},)

    def _ldap_call(self,func,*args,**kwargs):
        """
        Wrapper method mainly for serializing calls into OpenLDAP libs and trace logs
        """
        self._ldap_object_lock.acquire()
        if __debug__:
            if self._trace_level>=1:
                self._trace_file.write('*** %s %s - %s\n%s\n' % (
                    repr(self), self._uri,
                    '.'.join((self.__class__.__name__,func.__name__)),
                    pprint.pformat((args,kwargs))
                ))
                if self._trace_level>=9:
                    traceback.print_stack(limit=self._trace_stack_limit,file=self._trace_file)
        diagnostic_message_success = None
        try:
            try:
>               result = func(*args,**kwargs)
E               SERVER_DOWN: {'desc': "Can't contact LDAP server"}

/usr/lib64/python2.7/site-packages/ldap/ldapobject.py:106: SERVER_DOWN
----------------------------- Captured stderr call -----------------------------
INFO:lib389: ###############################################
INFO:lib389:#######
INFO:lib389:####### Invalid config (new): uniqueness-subtrees are invalid
INFO:lib389:#######
INFO:lib389:###############################################
____________________________ test_ticket47871_init _____________________________

topology = Master[localhost.localdomain:38941] -> Consumer[localhost.localdomain:38961

    def test_ticket47871_init(topology):
        """
        Initialize the test environment
        """
        topology.master.plugins.enable(name=PLUGIN_RETRO_CHANGELOG)
        mod = [(ldap.MOD_REPLACE, 'nsslapd-changelogmaxage', "10s"),  # 10 second triming
               (ldap.MOD_REPLACE, 'nsslapd-changelog-trim-interval', "5s")]
        topology.master.modify_s("cn=%s,%s" % (PLUGIN_RETRO_CHANGELOG, DN_PLUGIN), mod)
        #topology.master.plugins.enable(name=PLUGIN_MEMBER_OF)
        #topology.master.plugins.enable(name=PLUGIN_REFER_INTEGRITY)
        topology.master.stop(timeout=10)
        topology.master.start(timeout=10)
        topology.master.log.info("test_ticket47871_init topology %r" % (topology))
        # the test case will check if a warning message is logged in the
        # error log of the supplier
>       topology.master.errorlog_file = open(topology.master.errlog, "r")
E       IOError: [Errno 2] No such file or directory: '/var/log/dirsrv/slapd-master_1/error'

tickets/ticket47871_test.py:147: IOError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
OK group dirsrv exists
OK user dirsrv exists
('Update succeeded: status ', '0 Total update succeeded')
---------------------------- Captured stderr setup -----------------------------
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}yt2kYjlt1QPsQNaYRzCgX9MO1Ms2i2J0H8dAj6yPxLRw/5jz7Te8Lwik0aRIBrgw+sZQib0kqyWJaUbezyX5TYNg+OzB7CEw
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Found entry dn: cn=replrepl,cn=config
cn: bind dn pseudo user
cn: replrepl
objectClass: top
objectClass: person
sn: bind dn pseudo user
userPassword: {SSHA512}+Ndoaaf7S7UwQfb77oMhHN7tB3PniBK6EcFSSJkKkyND2plh8cavb8Oin2TM3wLxAsD32ULnPUpesAujXvpLFi8k91GjgwIp
DEBUG:tickets.ticket47871_test:cn=meTo_$host:$port,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
INFO:lib389:Starting total init cn=meTo_$host:$port,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
INFO:tickets.ticket47871_test:Replication is working.
----------------------------- Captured stderr call -----------------------------
INFO:lib389:test_ticket47871_init topology Master[localhost.localdomain:38941] -> Consumer[localhost.localdomain:38961
_______________________________ test_ticket48109 _______________________________

topology = <tickets.ticket48109_test.TopologyStandalone object at 0x7f6f537810d0>

    def test_ticket48109(topology):
        '''
        Set SubStr lengths to cn=uid,cn=index,...
            objectClass: extensibleObject
            nsIndexType: sub
            nsSubStrBegin: 2
            nsSubStrEnd: 2
        '''
        log.info('Test case 0')
        # add substr setting to UID_INDEX
        try:
            topology.standalone.modify_s(UID_INDEX, [(ldap.MOD_ADD, 'objectClass', 'extensibleObject'),
                                                     (ldap.MOD_ADD, 'nsIndexType', 'sub'),
                                                     (ldap.MOD_ADD, 'nsSubStrBegin', '2'),
                                                     (ldap.MOD_ADD, 'nsSubStrEnd', '2')])
        except ldap.LDAPError as e:
            log.error('Failed to add substr lengths: error ' + e.message['desc'])
            assert False
        # restart the server to apply the indexing
        topology.standalone.restart(timeout=10)
        # add a test user
        UID = 'auser0'
        USER_DN = 'uid=%s,%s' % (UID, SUFFIX)
        try:
            topology.standalone.add_s(Entry((USER_DN, {
                'objectclass': 'top person organizationalPerson inetOrgPerson'.split(),
                'cn': 'a user0',
                'sn': 'user0',
                'givenname': 'a',
                'mail': UID})))
        except ldap.LDAPError as e:
            log.error('Failed to add ' + USER_DN + ': error ' + e.message['desc'])
            assert False
        entries = topology.standalone.search_s(SUFFIX, ldap.SCOPE_SUBTREE, '(uid=a*)')
        assert len(entries) == 1
        # restart the server to check the access log
        topology.standalone.restart(timeout=10)
        cmdline = 'egrep %s %s | egrep "uid=a\*"' % (SUFFIX, topology.standalone.accesslog)
        p = os.popen(cmdline, "r")
        l0 = p.readline()
        if l0 == "":
            log.error('Search with "(uid=a*)" is not logged in ' + topology.standalone.accesslog)
>           assert False
E           assert False

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48109_test.py>:121: AssertionError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket48109_test:Test case 0
ERROR:tickets.ticket48109_test:Search with "(uid=a*)" is not logged in /var/log/dirsrv/slapd-standalone/access
____________________ test_ticket48266_count_csn_evaluation _____________________

topology = <tickets.ticket48266_test.TopologyReplication object at 0x7f6f4a5efdd0>
entries = None

    def test_ticket48266_count_csn_evaluation(topology, entries):
        ents = topology.master1.agreement.list(suffix=SUFFIX)
        assert len(ents) == 1
>       first_csn = _get_first_not_replicated_csn(topology)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48266_test.py>:328:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

topology = <tickets.ticket48266_test.TopologyReplication object at 0x7f6f4a5efdd0>

    def _get_first_not_replicated_csn(topology):
        name = "cn=%s2,%s" % (NEW_ACCOUNT, SUFFIX)
        # read the first CSN that will not be replicated
        mod = [(ldap.MOD_REPLACE, 'telephonenumber', str(123456))]
        topology.master1.modify_s(name, mod)
        msgid = topology.master1.search_ext(name, ldap.SCOPE_SUBTREE, 'objectclass=*', ['nscpentrywsi'])
        rtype, rdata, rmsgid = topology.master1.result2(msgid)
        attrs = None
        for dn, raw_attrs in rdata:
            topology.master1.log.info("dn: %s" % dn)
            if 'nscpentrywsi' in raw_attrs:
                attrs = raw_attrs['nscpentrywsi']
        assert attrs
        for attr in attrs:
            if attr.lower().startswith('telephonenumber'):
                break
        assert attr
        # now retrieve the CSN of the operation we are looking for
        csn = None
        topology.master1.stop(timeout=10)
        file_path = os.path.join(topology.master1.prefix, "var/log/dirsrv/slapd-%s/access" % topology.master1.serverid)
>       file_obj = open(file_path, "r")
E       IOError: [Errno 2] No such file or directory: '/usr/var/log/dirsrv/slapd-master_1/access'

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48266_test.py>:276: IOError
----------------------------- Captured stderr call -----------------------------
INFO:lib389:dn: cn=new_account2,dc=example,dc=com
__________________ test_ticket48270_homeDirectory_indexed_cis __________________

topology = <tickets.ticket48270_test.TopologyStandalone object at 0x7f6f4a5c4b10>

    def test_ticket48270_homeDirectory_indexed_cis(topology):
        log.info("\n\nindex homeDirectory in caseIgnoreIA5Match and caseExactIA5Match")
        try:
            ent = topology.standalone.getEntry(HOMEDIRECTORY_INDEX, ldap.SCOPE_BASE)
        except ldap.NO_SUCH_OBJECT:
            topology.standalone.add_s(Entry((HOMEDIRECTORY_INDEX, {
                'objectclass': "top nsIndex".split(),
                'cn': HOMEDIRECTORY_CN,
                'nsSystemIndex': 'false',
                'nsIndexType': 'eq'})))
        #log.info("attach debugger")
        #time.sleep(60)
        IGNORE_MR_NAME='caseIgnoreIA5Match'
        EXACT_MR_NAME='caseExactIA5Match'
        mod = [(ldap.MOD_REPLACE, MATCHINGRULE, (IGNORE_MR_NAME, EXACT_MR_NAME))]
        topology.standalone.modify_s(HOMEDIRECTORY_INDEX, mod)
        #topology.standalone.stop(timeout=10)
        log.info("successfully checked that filter with exact mr , a filter with lowercase eq is failing")
        #assert topology.standalone.db2index(bename=DEFAULT_BENAME, suffixes=None, attrs=['homeDirectory'])
        #topology.standalone.start(timeout=10)
        args = {TASK_WAIT: True}
        topology.standalone.tasks.reindex(suffix=SUFFIX, attrname='homeDirectory', args=args)
        log.info("Check indexing succeeded with a specified matching rule")
        file_path = os.path.join(topology.standalone.prefix, "var/log/dirsrv/slapd-%s/errors" % topology.standalone.serverid)
>       file_obj = open(file_path, "r")
E       IOError: [Errno 2] No such file or directory: '/usr/var/log/dirsrv/slapd-standalone/errors'

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48270_test.py>:100: IOError
----------------------------- Captured stderr call -----------------------------
INFO:tickets.ticket48270_test:
index homeDirectory in caseIgnoreIA5Match and caseExactIA5Match
INFO:tickets.ticket48270_test:successfully checked that filter with exact mr , a filter with lowercase eq is failing
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Index task index_homeDirectory_10272016_011943 completed successfully
INFO:tickets.ticket48270_test:Check indexing succeeded with a specified matching rule
_______________________________ test_ticket48383 _______________________________

topology = <tickets.ticket48383_test.TopologyStandalone object at 0x7f6f4ac309d0>

    def test_ticket48383(topology):
        """
        This test case will check that we re-alloc buffer sizes on import.c
        We achieve this by setting the servers dbcachesize to a stupid small value
        and adding huge objects to ds.
        Then when we run db2index, either:
        data stress suites tickets tmp
        If we are not using the re-alloc code, it will FAIL (Bad)
        data stress suites tickets tmp
        If we re-alloc properly, it all works regardless.
        """
        topology.standalone.config.set('nsslapd-maxbersize', '200000000')
        topology.standalone.restart()
        # Create some stupid huge objects / attributes in DS.
        # seeAlso is indexed by default. Lets do that!
        # This will take a while ...
        data = [random.choice(string.letters) for x in xrange(10000000)]
        s = "".join(data)
        # This was here for an iteration test.
        i = 1
        USER_DN = 'uid=user%s,ou=people,%s' % (i, DEFAULT_SUFFIX)
        padding = ['%s' % n for n in range(400)]
        user = Entry((USER_DN, {
            'objectclass': 'top posixAccount person extensibleObject'.split(),
            'uid': 'user%s' % (i),
            'cn': 'user%s' % (i),
            'uidNumber': '%s' % (i),
            'gidNumber': '%s' % (i),
            'homeDirectory': '/home/user%s' % (i),
            'description': 'user description',
            'sn': s,
            'padding': padding,
        }))
        try:
            topology.standalone.add_s(user)
        except ldap.LDAPError as e:
            log.fatal('test 48383: Failed to add user%s: error %s' % (i, e.message['desc']))
            assert False
        # Set the dbsize really low.
        try:
            topology.standalone.modify_s(DEFAULT_BENAME, [(ldap.MOD_REPLACE, 'nsslapd-cachememsize', '1')])
        except ldap.LDAPError as e:
            log.fatal('Failed to change nsslapd-cachememsize ' + e.message['desc'])
        ## Does ds try and set a minimum possible value for this?
        ## Yes: [16/Feb/2016:16:39:18 +1000] - WARNING: cache too small, increasing to 500K bytes
        # Given the formula, by default, this means DS will make the buffsize 400k
        # So an object with a 1MB attribute should break indexing
        # stop the server
        topology.standalone.stop(timeout=30)
        # Now export and import the DB. It's easier than db2index ...
        topology.standalone.db2ldif(bename=DEFAULT_BENAME, suffixes=[DEFAULT_SUFFIX],
                                    excludeSuffixes=[], encrypt=False, repl_data=True,
                                    outputfile='%s/ldif/%s.ldif' % (topology.standalone.dbdir, SERVERID_STANDALONE))
        result = topology.standalone.ldif2db(DEFAULT_BENAME, None, None, False,
                                             '%s/ldif/%s.ldif' % (topology.standalone.dbdir, SERVERID_STANDALONE))
>       assert(result)
E       assert False

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48383_test.py>:123: AssertionError
---------------------------- Captured stdout setup -----------------------------
OK group dirsrv exists
OK user dirsrv exists
----------------------------- Captured stdout call -----------------------------
OK group dirsrv exists
OK user dirsrv exists
Exported ldif file: /var/lib/dirsrv/slapd-standalone/db/ldif/standalone.ldif
OK group dirsrv exists
OK user dirsrv exists
----------------------------- Captured stderr call -----------------------------
CRITICAL:tickets.ticket48383_test:Failed to change nsslapd-cachememsize No such object
INFO:lib389:Running script: /usr/sbin/db2ldif -Z standalone -n userRoot -s dc=example,dc=com -a
/var/lib/dirsrv/slapd-standalone/db/ldif/standalone.ldif -r
[27/Oct/2016:01:29:23.637729301 +0200] - DEBUG - ldbm_back_start - userRoot: entry cache size: 10485760 B; db size: 10321920 B
[27/Oct/2016:01:29:23.640767399 +0200] - DEBUG - ldbm_back_start - total cache size: 20971520 B;
[27/Oct/2016:01:29:23.642689006 +0200] - DEBUG - ldbm_back_start - Total entry cache size: 20971520 B; dbcache size: 10000000 B; available memory size: 2154676224 B;
[27/Oct/2016:01:29:23.654871919 +0200] - NOTICE - dblayer_start - Detected Disorderly Shutdown last time Directory Server was running, recovering database.
ldiffile: /var/lib/dirsrv/slapd-standalone/db/ldif/standalone.ldif
[27/Oct/2016:01:29:24.333710130 +0200] - ERR - ldbm_back_ldbm2ldif - db2ldif: can't open /var/lib/dirsrv/slapd-standalone/db/ldif/standalone.ldif: 2 (No such file or directory)
[27/Oct/2016:01:29:24.375901389 +0200] - INFO - dblayer_pre_close - Waiting for 4 database threads to stop
[27/Oct/2016:01:29:25.297550610 +0200] - INFO - dblayer_pre_close - All database threads now stopped
ERROR:lib389:ldif2db: Can't find file: /var/lib/dirsrv/slapd-standalone/db/ldif/standalone.ldif
___________________ test_ticket48497_homeDirectory_index_run ___________________

topology = <tickets.ticket48497_test.TopologyStandalone object at 0x7f6f4af9b090>

    def test_ticket48497_homeDirectory_index_run(topology):
        args = {TASK_WAIT: True}
        topology.standalone.tasks.reindex(suffix=SUFFIX, attrname='homeDirectory', args=args)
        log.info("Check indexing succeeded with a specified matching rule")
        file_path = os.path.join(topology.standalone.prefix, "var/log/dirsrv/slapd-%s/errors" % topology.standalone.serverid)
>       file_obj = open(file_path, "r")
E       IOError: [Errno 2] No such file or directory: '/usr/var/log/dirsrv/slapd-standalone/errors'

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48497_test.py>:139: IOError
----------------------------- Captured stderr call -----------------------------
INFO:lib389:List backend with suffix=dc=example,dc=com
INFO:lib389:Index task index_homeDirectory_10272016_012950 completed
successfully INFO:tickets.ticket48497_test:Check indexing succeeded with a specified matching rule __________________ test_ticket48745_homeDirectory_indexed_cis __________________ topology = <tickets.ticket48745_test.TopologyStandalone object at 0x7f6f4ac2b9d0> def test_ticket48745_homeDirectory_indexed_cis(topology): log.info("\n\nindex homeDirectory in caseIgnoreIA5Match and caseExactIA5Match") try: ent = topology.standalone.getEntry(HOMEDIRECTORY_INDEX, ldap.SCOPE_BASE) except ldap.NO_SUCH_OBJECT: topology.standalone.add_s(Entry((HOMEDIRECTORY_INDEX, { 'objectclass': "top nsIndex".split(), 'cn': HOMEDIRECTORY_CN, 'nsSystemIndex': 'false', 'nsIndexType': 'eq'}))) #log.info("attach debugger") #time.sleep(60) IGNORE_MR_NAME='caseIgnoreIA5Match' EXACT_MR_NAME='caseExactIA5Match' mod = [(ldap.MOD_REPLACE, MATCHINGRULE, (IGNORE_MR_NAME, EXACT_MR_NAME))] topology.standalone.modify_s(HOMEDIRECTORY_INDEX, mod) #topology.standalone.stop(timeout=10) log.info("successfully checked that filter with exact mr , a filter with lowercase eq is failing") #assert topology.standalone.db2index(bename=DEFAULT_BENAME, suffixes=None, attrs=['homeDirectory']) #topology.standalone.start(timeout=10) args = {TASK_WAIT: True} topology.standalone.tasks.reindex(suffix=SUFFIX, attrname='homeDirectory', args=args) log.info("Check indexing succeeded with a specified matching rule") file_path = os.path.join(topology.standalone.prefix, "var/log/dirsrv/slapd-%s/errors" % topology.standalone.serverid) > file_obj = open(file_path, "r") E IOError: [Errno 2] No such file or directory: '/usr/var/log/dirsrv/slapd-standalone/errors' <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48745_test.py>:110: IOError ----------------------------- Captured stderr call ----------------------------- INFO:tickets.ticket48745_test: index homeDirectory in caseIgnoreIA5Match and caseExactIA5Match INFO:tickets.ticket48745_test:successfully checked that 
filter with exact mr , a filter with lowercase eq 
is failing INFO:lib389:List backend with suffix=dc=example,dc=com INFO:lib389:Index task index_homeDirectory_10272016_013109 completed successfully INFO:tickets.ticket48745_test:Check indexing succeeded with a specified matching rule __________________ test_ticket48746_homeDirectory_indexed_cis __________________ topology = <tickets.ticket48746_test.TopologyStandalone object at 0x7f6f53113350> def test_ticket48746_homeDirectory_indexed_cis(topology): log.info("\n\nindex homeDirectory in caseIgnoreIA5Match and caseExactIA5Match") try: ent = topology.standalone.getEntry(HOMEDIRECTORY_INDEX, ldap.SCOPE_BASE) except ldap.NO_SUCH_OBJECT: topology.standalone.add_s(Entry((HOMEDIRECTORY_INDEX, { 'objectclass': "top nsIndex".split(), 'cn': HOMEDIRECTORY_CN, 'nsSystemIndex': 'false', 'nsIndexType': 'eq'}))) #log.info("attach debugger") #time.sleep(60) IGNORE_MR_NAME='caseIgnoreIA5Match' EXACT_MR_NAME='caseExactIA5Match' mod = [(ldap.MOD_REPLACE, MATCHINGRULE, (IGNORE_MR_NAME, EXACT_MR_NAME))] topology.standalone.modify_s(HOMEDIRECTORY_INDEX, mod) #topology.standalone.stop(timeout=10) log.info("successfully checked that filter with exact mr , a filter with lowercase eq is failing") #assert topology.standalone.db2index(bename=DEFAULT_BENAME, suffixes=None, attrs=['homeDirectory']) #topology.standalone.start(timeout=10) args = {TASK_WAIT: True} topology.standalone.tasks.reindex(suffix=SUFFIX, attrname='homeDirectory', args=args) log.info("Check indexing succeeded with a specified matching rule") file_path = os.path.join(topology.standalone.prefix, "var/log/dirsrv/slapd-%s/errors" % topology.standalone.serverid) > file_obj = open(file_path, "r") E IOError: [Errno 2] No such file or directory: '/usr/var/log/dirsrv/slapd-standalone/errors' <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48746_test.py>:108: IOError ----------------------------- Captured stderr call ----------------------------- 
INFO:tickets.ticket48746_test: index homeDirectory in caseIgnoreIA
5Match and caseExactIA5Match INFO:tickets.ticket48746_test:successfully checked that filter with exact mr , a filter with lowercase eq is failing INFO:lib389:List backend with suffix=dc=example,dc=com INFO:lib389:Index task index_homeDirectory_10272016_013134 completed successfully INFO:tickets.ticket48746_test:Check indexing succeeded with a specified matching rule __________________ test_ticket48746_homeDirectory_indexed_ces __________________ topology = <tickets.ticket48746_test.TopologyStandalone object at 0x7f6f53113350> def test_ticket48746_homeDirectory_indexed_ces(topology): log.info("\n\nindex homeDirectory in caseExactIA5Match, this would trigger the crash") try: ent = topology.standalone.getEntry(HOMEDIRECTORY_INDEX, ldap.SCOPE_BASE) except ldap.NO_SUCH_OBJECT: topology.standalone.add_s(Entry((HOMEDIRECTORY_INDEX, { 'objectclass': "top nsIndex".split(), 'cn': HOMEDIRECTORY_CN, 'nsSystemIndex': 'false', 'nsIndexType': 'eq'}))) # log.info("attach debugger") # time.sleep(60) EXACT_MR_NAME='caseExactIA5Match' mod = [(ldap.MOD_REPLACE, MATCHINGRULE, (EXACT_MR_NAME))] topology.standalone.modify_s(HOMEDIRECTORY_INDEX, mod) #topology.standalone.stop(timeout=10) log.info("successfully checked that filter with exact mr , a filter with lowercase eq is failing") #assert topology.standalone.db2index(bename=DEFAULT_BENAME, suffixes=None, attrs=['homeDirectory']) #topology.standalone.start(timeout=10) args = {TASK_WAIT: True} topology.standalone.tasks.reindex(suffix=SUFFIX, attrname='homeDirectory', args=args) log.info("Check indexing succeeded with a specified matching rule") file_path = os.path.join(topology.standalone.prefix, "var/log/dirsrv/slapd-%s/errors" % topology.standalone.serverid) > file_obj = open(file_path, "r") E IOError: [Errno 2] No such file or directory: '/usr/var/log/dirsrv/slapd-standalone/errors' <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48746_test.py>:172: IOError 
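All of the homeDirectory IOError failures above have the same shape: the test joins topology.standalone.prefix ('/usr' on this builder) with 'var/log/...', producing '/usr/var/log/...' while the packaged server logs under '/var/log/...'. A minimal sketch of the path logic and a fallback, assuming a hypothetical helper name (resolve_errors_log and its probing order are not lib389 API):

```python
import os

def resolve_errors_log(prefix, serverid):
    """Hypothetical helper: pick an errors-log path that exists.

    The failing tests do os.path.join(prefix, 'var/log/...'), which for
    prefix='/usr' yields '/usr/var/log/...'.  Probing both the prefixed
    and the bare '/var/log/...' location would avoid the IOError on a
    package-installed (non-prefixed) server.
    """
    rel = "var/log/dirsrv/slapd-%s/errors" % serverid
    candidates = [
        os.path.join(prefix or "/", rel),  # prefixed build layout
        os.path.join("/", rel),            # package install: /var/log/...
    ]
    for path in candidates:
        if os.path.exists(path):
            return path
    # neither exists: return the prefixed path so the caller's open()
    # error message is unchanged from today's behaviour
    return candidates[0]
```

With prefix='/usr' the first candidate reproduces the exact path in the tracebacks, which is why every reindex test here dies at open() rather than at the reindex task itself.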
----------------------------- Captured stderr call ----
------------------------- INFO:tickets.ticket48746_test: index homeDirectory in caseExactIA5Match, this would trigger the crash INFO:tickets.ticket48746_test:successfully checked that filter with exact mr , a filter with lowercase eq is failing INFO:lib389:List backend with suffix=dc=example,dc=com INFO:lib389:Index task index_homeDirectory_10272016_013136 completed successfully INFO:tickets.ticket48746_test:Check indexing succeeded with a specified matching rule _____________________ test_ticket48906_dblock_ldap_update ______________________ topology = <tickets.ticket48906_test.TopologyStandalone object at 0x7f6f5310a310> def test_ticket48906_dblock_ldap_update(topology): topology.standalone.log.info('###################################') topology.standalone.log.info('###') topology.standalone.log.info('### Check that after ldap update') topology.standalone.log.info('### - monitor contains DEFAULT') topology.standalone.log.info('### - configured contains DBLOCK_LDAP_UPDATE') topology.standalone.log.info('### - After stop dse.ldif contains DBLOCK_LDAP_UPDATE') topology.standalone.log.info('### - After stop guardian contains DEFAULT') topology.standalone.log.info('### In fact guardian should differ from config to recreate the env') topology.standalone.log.info('### Check that after restart (DBenv recreated)') topology.standalone.log.info('### - monitor contains DBLOCK_LDAP_UPDATE ') topology.standalone.log.info('### - configured contains DBLOCK_LDAP_UPDATE') topology.standalone.log.info('### - dse.ldif contains DBLOCK_LDAP_UPDATE') topology.standalone.log.info('###') topology.standalone.log.info('###################################') topology.standalone.modify_s(ldbm_config, [(ldap.MOD_REPLACE, DBLOCK_ATTR_CONFIG, DBLOCK_LDAP_UPDATE)]) _check_monitored_value(topology, DBLOCK_DEFAULT) _check_configured_value(topology, attr=DBLOCK_ATTR_CONFIG, expected_value=DBLOCK_LDAP_UPDATE, required=True) topology.standalone.stop(timeout=10) _check_dse_ldif_value(topology, 
attr=DBLOCK_ATTR_CONFIG, expected_value=DBLOCK_LDAP_UPDA
TE) > _check_guardian_value(topology, attr=DBLOCK_ATTR_GUARDIAN, expected_value=DBLOCK_DEFAULT) <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48906_test.py>:218: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ topology = <tickets.ticket48906_test.TopologyStandalone object at 0x7f6f5310a310> attr = 'locks', expected_value = '10000' def _check_guardian_value(topology, attr=DBLOCK_ATTR_CONFIG, expected_value=None): guardian_file = topology.standalone.dbdir + '/db/guardian' > assert(os.path.exists(guardian_file)) E assert <function exists at 0x7f6f64107050>('/var/lib/dirsrv/slapd-standalone/db/db/guardian') E + where <function exists at 0x7f6f64107050> = <module 'posixpath' from '/usr/lib64/python2.7/posixpath.pyc'>.exists E + where <module 'posixpath' from '/usr/lib64/python2.7/posixpath.pyc'> = os.path <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48906_test.py>:164: AssertionError ----------------------------- Captured stderr call ----------------------------- INFO:lib389:################################### INFO:lib389:### INFO:lib389:### Check that after ldap update INFO:lib389:### - monitor contains DEFAULT INFO:lib389:### - configured contains DBLOCK_LDAP_UPDATE INFO:lib389:### - After stop dse.ldif contains DBLOCK_LDAP_UPDATE INFO:lib389:### - After stop guardian contains DEFAULT INFO:lib389:### In fact guardian should differ from config to recreate the env INFO:lib389:### Check that after restart (DBenv recreated) INFO:lib389:### - monitor contains DBLOCK_LDAP_UPDATE INFO:lib389:### - configured contains DBLOCK_LDAP_UPDATE INFO:lib389:### - dse.ldif contains DBLOCK_LDAP_UPDATE INFO:lib389:### INFO:lib389:################################### _____________________ test_ticket48906_dblock_edit_update ______________________ topology = <tickets.ticket48906_test.TopologyStandalone object at 
0x7f6f5310a310> def test_ticket48906_dblock_edit_up
date(topology): topology.standalone.log.info('###################################') topology.standalone.log.info('###') topology.standalone.log.info('### Check that after stop') topology.standalone.log.info('### - dse.ldif contains DBLOCK_LDAP_UPDATE') topology.standalone.log.info('### - guardian contains DBLOCK_LDAP_UPDATE') topology.standalone.log.info('### Check that edit dse+restart') topology.standalone.log.info('### - monitor contains DBLOCK_EDIT_UPDATE') topology.standalone.log.info('### - configured contains DBLOCK_EDIT_UPDATE') topology.standalone.log.info('### Check that after stop') topology.standalone.log.info('### - dse.ldif contains DBLOCK_EDIT_UPDATE') topology.standalone.log.info('### - guardian contains DBLOCK_EDIT_UPDATE') topology.standalone.log.info('###') topology.standalone.log.info('###################################') topology.standalone.stop(timeout=10) _check_dse_ldif_value(topology, attr=DBLOCK_ATTR_CONFIG, expected_value=DBLOCK_LDAP_UPDATE) > _check_guardian_value(topology, attr=DBLOCK_ATTR_GUARDIAN, expected_value=DBLOCK_LDAP_UPDATE) <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48906_test.py>:243: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ topology = <tickets.ticket48906_test.TopologyStandalone object at 0x7f6f5310a310> attr = 'locks', expected_value = '20000' def _check_guardian_value(topology, attr=DBLOCK_ATTR_CONFIG, expected_value=None): guardian_file = topology.standalone.dbdir + '/db/guardian' > assert(os.path.exists(guardian_file)) E assert <function exists at 0x7f6f64107050>('/var/lib/dirsrv/slapd-standalone/db/db/guardian') E + where <function exists at 0x7f6f64107050> = <module 'posixpath' from '/usr/lib64/python2.7/posixpath.pyc'>.exists E + where <module 'posixpath' from '/usr/lib64/python2.7/posixpath.pyc'> = os.path 
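The guardian assertions fail on a doubled path component: dbdir is already '/var/lib/dirsrv/slapd-standalone/db', and the test appends '/db/guardian', giving '.../db/db/guardian'. A hedged sketch of a builder that tolerates either form of dbdir (guardian_path is a hypothetical helper, not lib389 code):

```python
import os

def guardian_path(dbdir):
    """Append 'db/guardian' only when dbdir does not already end in 'db'.

    Avoids the '/db/db/guardian' doubling visible in the assert above,
    whether dbdir is the instance directory or the db directory itself.
    """
    dbdir = dbdir.rstrip("/")
    if os.path.basename(dbdir) != "db":
        dbdir = os.path.join(dbdir, "db")
    return os.path.join(dbdir, "guardian")
```

Either way the os.path.exists() check would then point at the real guardian file instead of a path that can never exist.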
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48906_test.py>:164: Assertio
nError ----------------------------- Captured stderr call ----------------------------- INFO:lib389:################################### INFO:lib389:### INFO:lib389:### Check that after stop INFO:lib389:### - dse.ldif contains DBLOCK_LDAP_UPDATE INFO:lib389:### - guardian contains DBLOCK_LDAP_UPDATE INFO:lib389:### Check that edit dse+restart INFO:lib389:### - monitor contains DBLOCK_EDIT_UPDATE INFO:lib389:### - configured contains DBLOCK_EDIT_UPDATE INFO:lib389:### Check that after stop INFO:lib389:### - dse.ldif contains DBLOCK_EDIT_UPDATE INFO:lib389:### - guardian contains DBLOCK_EDIT_UPDATE INFO:lib389:### INFO:lib389:################################### ________________________ test_ticket48906_dblock_robust ________________________ topology = <tickets.ticket48906_test.TopologyStandalone object at 0x7f6f5310a310> def test_ticket48906_dblock_robust(topology): topology.standalone.log.info('###################################') topology.standalone.log.info('###') topology.standalone.log.info('### Check that the following values are rejected') topology.standalone.log.info('### - negative value') topology.standalone.log.info('### - insuffisant value') topology.standalone.log.info('### - invalid value') topology.standalone.log.info('### Check that minimum value is accepted') topology.standalone.log.info('###') topology.standalone.log.info('###################################') topology.standalone.start(timeout=10) > _check_monitored_value(topology, DBLOCK_EDIT_UPDATE) <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48906_test.py>:291: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ topology = <tickets.ticket48906_test.TopologyStandalone object at 0x7f6f5310a310> expected_value = '40000' def _check_monitored_value(topology, expected_value): entries = topology.standalone.search_s(ldbm_monitor, ldap.SCOPE_BASE, '(objectclass=*)') > 
assert(entries[0].hasValue(DBLOCK_ATTR_MONITOR) and entries[0].getValue(DBLOCK_ATTR_M
ONITOR) == expected_value) E assert (True and '20000' == '40000' E + where True = <bound method Entry.hasValue of dn: cn=database,cn=monitor,cn=ldbm database,cn...pd-db-txn-region-wait-rate: 0\nobjectClass: top\nobjectClass: extensibleObject\n\n>('nsslapd-db-configured-locks') E + where <bound method Entry.hasValue of dn: cn=database,cn=monitor,cn=ldbm database,cn...pd-db-txn-region-wait-rate: 0\nobjectClass: top\nobjectClass: extensibleObject\n\n> = dn: cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config\ncn: database\n...apd-db-txn-region-wait-rate: 0\nobjectClass: top\nobjectClass: extensibleObject\n\n.hasValue E - 20000 E ? ^ E + 40000 E ? ^) <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/tickets/ticket48906_test.py>:144: AssertionError ----------------------------- Captured stderr call ----------------------------- INFO:lib389:################################### INFO:lib389:### INFO:lib389:### Check that the following values are rejected INFO:lib389:### - negative value INFO:lib389:### - insuffisant value INFO:lib389:### - invalid value INFO:lib389:### Check that minimum value is accepted INFO:lib389:### INFO:lib389:################################### INFO:lib389:open(): Connecting to uri ldap://localhost.localdomain:38931/ INFO:lib389:open(): bound as cn=Directory Manager ____________________________ test_range_search_init ____________________________ topology = <suites.memory_leaks.range_search_test.TopologyStandalone object at 0x7f6f4ac3b990> def test_range_search_init(topology): ''' Enable retro cl, and valgrind. 
Since valgrind tests move the ns-slapd binary around it's important to always "valgrind_disable" before "assert False"ing, otherwise we leave the wrong ns-slapd in place if there is a failure ''' log.info('Initializing test_range_search...') topology.standalone.plugins.enable(name=PLUGIN_RETRO_CHANGELOG) # First stop the instance topology.standalone.stop(timeout=30) # Get the sbin directory so we know where to replace 'ns-slapd' 
sbin_dir = get_sbin_dir(prefix=topology.standalone.prefix) # Enable valgrind if not topology.standalone.has_asan(): > valgrind_enable(sbin_dir) <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/memory_leaks/range_search_test.py>:86: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ sbin_dir = '/usr/sbin' wrapper = '<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/ns-slapd.valgrind'> def valgrind_enable(sbin_dir, wrapper=None): ''' Copy the valgrind ns-slapd wrapper into the /sbin directory (making a backup of the original ns-slapd binary). The script calling valgrind_enable() must be run as the 'root' user as selinux needs to be disabled for valgrind to work The server instance(s) should be stopped prior to calling this function. Then after calling valgrind_enable(): - Start the server instance(s) with a timeout of 60 (valgrind takes a while to startup) - Run the tests - Stop the server - Get the results file - Run valgrind_check_file(result_file, "pattern", "pattern", ...) - Run valgrind_disable() :param sbin_dir: the location of the ns-slapd binary (e.g. /usr/sbin) :param wrapper: The valgrind wrapper script for ns-slapd (if not set, a default wrapper is used) :raise IOError: If there is a problem setting up the valgrind scripts :raise EnvironmentError: If script is not run as 'root' ''' if os.geteuid() != 0: log.error('This script must be run as root to use valgrind') raise EnvironmentError if not wrapper: # use the default ns-slapd wrapper wrapper = '%s/%s' % (os.path.dirname(os.path.abspath(__file__)), VALGRIND_WRAPPER) nsslapd_orig = '%s/ns-slapd' % sbin_dir nsslapd_backup = '%s/ns-slapd.original' % sbin_dir if os.path.isfile(nsslapd_backup): # There is a backup which means we never cleaned up from a previous # run(failed test?) 
if not filecmp.cmp(nsslapd_backup, nsslapd_orig): # Files are different sizes, we assume valgrind is already setup log.info('Valgrind is alrea
dy enabled.') return # Check both nsslapd's exist if not os.path.isfile(wrapper): raise IOError('The valgrind wrapper (%s) does not exist. file=%s' % (wrapper, __file__)) if not os.path.isfile(nsslapd_orig): raise IOError('The binary (%s) does not exist or is not accessible.' % nsslapd_orig) # Make a backup of the original ns-slapd and copy the wrapper into place try: shutil.copy2(nsslapd_orig, nsslapd_backup) except IOError as e: log.fatal('valgrind_enable(): failed to backup ns-slapd, error: %s' % e.strerror) raise IOError('failed to backup ns-slapd, error: %s' % e.strerror) # Copy the valgrind wrapper into place try: shutil.copy2(wrapper, nsslapd_orig) except IOError as e: log.fatal('valgrind_enable(): failed to copy valgrind wrapper ' 'to ns-slapd, error: %s' % e.strerror) raise IOError('failed to copy valgrind wrapper to ns-slapd, error: %s' % > e.strerror) E IOError: failed to copy valgrind wrapper to ns-slapd, error: Text file busy <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/utils.py>:255: IOError ---------------------------- Captured stdout setup ----------------------------- OK group dirsrv exists OK user dirsrv exists ----------------------------- Captured stderr call ----------------------------- INFO:suites.memory_leaks.range_search_test:Initializing test_range_search... CRITICAL:lib389.utils:valgrind_enable(): failed to copy valgrind wrapper to ns-slapd, error: Text file busy ___________________________ test_multi_suffix_search ___________________________ topology = <suites.paged_results.paged_results_test.TopologyStandalone object at 0x7f6f4af61450> test_user = None, new_suffixes = None def test_multi_suffix_search(topology, test_user, new_suffixes): """Verify that page result search returns empty cookie if there is no returned entry. 
:Feature: Simple paged results :Setup: Standalone instance, test user for binding, two suffixes with backends, one is inserted into another, 10 users for the search base within each suffix :Steps: 1. Bind as test us
er 2. Search through all 20 added users with a simple paged control using page_size = 4 3. Wait some time logs to be updated 3. Check access log :Assert: All users should be found, the access log should contain the pr_cookie for each page request and it should be equal 0, except the last one should be equal -1 """ search_flt = r'(uid=test*)' searchreq_attrlist = ['dn', 'sn'] page_size = 4 users_num = 20 log.info('Clear the access log') topology.standalone.deleteAccessLogs() users_list_1 = add_users(topology, users_num / 2, NEW_SUFFIX_1) users_list_2 = add_users(topology, users_num / 2, NEW_SUFFIX_2) try: log.info('Set DM bind') topology.standalone.simple_bind_s(DN_DM, PASSWORD) req_ctrl = SimplePagedResultsControl(True, size=page_size, cookie='') all_results = paged_search(topology, NEW_SUFFIX_1, [req_ctrl], search_flt, searchreq_attrlist) log.info('{} results'.format(len(all_results))) assert len(all_results) == users_num log.info('Restart the server to flush the logs') topology.standalone.restart(timeout=10) access_log_lines = topology.standalone.ds_access_log.match('.*pr_cookie=.*') pr_cookie_list = ([line.rsplit('=', 1)[-1] for line in access_log_lines]) pr_cookie_list = [int(pr_cookie) for pr_cookie in pr_cookie_list] log.info('Assert that last pr_cookie == -1 and others pr_cookie == 0') pr_cookie_zeros = list(pr_cookie == 0 for pr_cookie in pr_cookie_list[0:-1]) assert all(pr_cookie_zeros) > assert pr_cookie_list[-1] == -1 E IndexError: list index out of range <http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/paged_results/paged_results_test.py>:1198: IndexError ---------------------------- Captured stderr setup ----------------------------- INFO:suites.paged_results.paged_results_test:Adding suffix:o=test_parent and backend: parent_base INFO:lib389:List backend with suffix=o=test_parent INFO:lib389:Creating a local backend INFO:lib389:List backend cn=parent_base,cn=ldbm 
database,cn=plugins,cn=config INFO:lib389:Found entry dn: cn=parent_base,cn=
ldbm database,cn=plugins,cn=config cn: parent_base nsslapd-cachememsize: 10485760 nsslapd-cachesize: -1 nsslapd-directory: /var/lib/dirsrv/slapd-standalone/db/parent_base nsslapd-dncachememsize: 10485760 nsslapd-readonly: off nsslapd-require-index: off nsslapd-suffix: o=test_parent objectClass: top objectClass: extensibleObject objectClass: nsBackendInstance INFO:lib389:Entry dn: cn="o=test_parent",cn=mapping tree,cn=config cn: o=test_parent nsslapd-backend: parent_base nsslapd-state: backend objectclass: top objectclass: extensibleObject objectclass: nsMappingTree INFO:lib389:Found entry dn: cn=o\3Dtest_parent,cn=mapping tree,cn=config cn: o=test_parent nsslapd-backend: parent_base nsslapd-state: backend objectClass: top objectClass: extensibleObject objectClass: nsMappingTree INFO:suites.paged_results.paged_results_test:Adding suffix:ou=child,o=test_parent and backend: child_base INFO:lib389:List backend with suffix=ou=child,o=test_parent INFO:lib389:Creating a local backend INFO:lib389:List backend cn=child_base,cn=ldbm database,cn=plugins,cn=config INFO:lib389:Found entry dn: cn=child_base,cn=ldbm database,cn=plugins,cn=config cn: child_base nsslapd-cachememsize: 10485760 nsslapd-cachesize: -1 nsslapd-directory: /var/lib/dirsrv/slapd-standalone/db/child_base nsslapd-dncachememsize: 10485760 nsslapd-readonly: off nsslapd-require-index: off nsslapd-suffix: ou=child,o=test_parent objectClass: top objectClass: extensibleObject objectClass: nsBackendInstance INFO:lib389:Entry dn: cn="ou=child,o=test_parent",cn=mapping tree,cn=config cn: ou=child,o=test_parent nsslapd-backend: child_base nsslapd-parent-suffix: o=test_parent nsslapd-state: backend objectclass: top objectclass: extensibleObject objectclass: nsMappingTree INFO:lib389:Found entry dn: cn=ou\3Dchild\2Co\3Dtest_parent,cn=mapping tree,cn=config cn: ou=child,o=test_parent nsslapd-backend: child_base nsslapd-parent-suffix: o=test_parent nsslapd-state: backend objectClass: top objectClass: extensibleObject 
objectClass: nsMappingTree INFO:suites.paged_results.
paged_results_test:Adding ACI to allow our test user to search ----------------------------- Captured stderr call ----------------------------- INFO:suites.paged_results.paged_results_test:Clear the access log INFO:suites.paged_results.paged_results_test:Adding 10 users INFO:suites.paged_results.paged_results_test:Adding 10 users INFO:suites.paged_results.paged_results_test:Set DM bind INFO:suites.paged_results.paged_results_test:Running simple paged result search with - search suffix: o=test_parent; filter: (uid=test*); attr list ['dn', 'sn']; page_size = 4; controls: [<ldap.controls.libldap.SimplePagedResultsControl instance at 0x7f6f4af4aef0>]. INFO:suites.paged_results.paged_results_test:Getting page 0 INFO:suites.paged_results.paged_results_test:Getting page 1 INFO:suites.paged_results.paged_results_test:Getting page 2 INFO:suites.paged_results.paged_results_test:Getting page 3 INFO:suites.paged_results.paged_results_test:Getting page 4 INFO:suites.paged_results.paged_results_test:Getting page 5 INFO:suites.paged_results.paged_results_test:20 results INFO:suites.paged_results.paged_results_test:Restart the server to flush the logs INFO:suites.paged_results.paged_results_test:Assert that last pr_cookie == -1 and others pr_cookie == 0 INFO:suites.paged_results.paged_results_test:Remove added users INFO:suites.paged_results.paged_results_test:Deleting 10 users INFO:suites.paged_results.paged_results_test:Deleting 10 users ________________________ test_cleanallruv_stress_clean _________________________ topology = <suites.replication.cleanallruv_test.TopologyReplication object at 0x7f6f4bf19190> def test_cleanallruv_stress_clean(topology): ''' Put each server(m1 - m4) under stress, and perform the entire clean process ''' log.info('Running test_cleanallruv_stress_clean...') log.info('test_cleanallruv_stress_clean: put all the masters under load...') # Put all the masters under load m1_add_users = AddUsers(topology.master1, 2000) m1_add_users.start() m2_add_users = 
        m2_add_users = AddUsers(topology.master2, 2000)
        m2_add_users.start()
        m3_add_users = AddUsers(topology.master3, 2000)
        m3_add_users.start()
        m4_add_users = AddUsers(topology.master4, 2000)
        m4_add_users.start()

        # Allow some time to get replication flowing in all directions
        log.info('test_cleanallruv_stress_clean: allow some time for replication to get flowing...')
        time.sleep(5)

        # Put master 4 into read only mode
        log.info('test_cleanallruv_stress_clean: put master 4 into read-only mode...')
        try:
            topology.master4.modify_s(DN_CONFIG, [(ldap.MOD_REPLACE, 'nsslapd-readonly', 'on')])
        except ldap.LDAPError as e:
            log.fatal('test_cleanallruv_stress_clean: Failed to put master 4 into read-only mode: error ' + e.message['desc'])
            assert False

        # We need to wait for master 4 to push its changes out
        log.info('test_cleanallruv_stress_clean: allow some time for master 4 to push changes out (60 seconds)...')
        time.sleep(60)

        # Disable master 4
        log.info('test_cleanallruv_stress_clean: disable replication on master 4...')
        try:
            topology.master4.replica.disableReplication(DEFAULT_SUFFIX)
        except:
            log.fatal('test_cleanallruv_stress_clean: failed to diable replication')
            assert False

        # Remove the agreements from the other masters that point to master 4
        remove_master4_agmts("test_cleanallruv_stress_clean", topology)

        # Run the task
        log.info('test_cleanallruv_stress_clean: Run the cleanAllRUV task...')
        try:
            topology.master1.tasks.cleanAllRUV(suffix=DEFAULT_SUFFIX, replicaid='4', args={TASK_WAIT: True})
        except ValueError as e:
            log.fatal('test_cleanallruv_stress_clean: Problem running cleanAllRuv task: ' + e.message('desc'))
            assert False

        # Wait for the update to finish
        log.info('test_cleanallruv_stress_clean: wait for all the updates to finish...')
        m1_add_users.join()
        m2_add_users.join()
        m3_add_users.join()
        m4_add_users.join()

        # Check the other master's RUV for 'replica 4'
        log.info('test_cleanallruv_stress_clean: check if all the replicas have been cleaned...')
        clean = check_ruvs("test_cleanallruv_stress_clean", topology)
        if not clean:
            log.fatal('test_cleanallruv_stress_clean: Failed to clean replicas')
            assert False

        log.info('test_cleanallruv_stress_clean: PASSED, restoring master 4...')

        #
        # Cleanup - restore master 4
        #

        # Sleep for a bit to allow replication to complete
        log.info("Sleep for 120 seconds to allow replication to complete...")
        time.sleep(120)

        # Turn off readonly mode
        try:
            topology.master4.modify_s(DN_CONFIG, [(ldap.MOD_REPLACE, 'nsslapd-readonly', 'off')])
        except ldap.LDAPError as e:
            log.fatal('test_cleanallruv_stress_clean: Failed to put master 4 into read-only mode: error ' + e.message['desc'])
            assert False

>       restore_master4(topology)

<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/replication/cleanallruv_test.py>:1208:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/ds/dirsrvtests/tests/suites/replication/cleanallruv_test.py>:571: in restore_master4
    topology.master2.start(timeout=30)
<http://vm-058-081.abc.idm.lab.eng.brq.redhat.com:8080/job/389-DS-NIGHTLY/ws/source/lib389/lib389/__init__.py>:1096: in start
    "dirsrv@%s" % self.serverid])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

popenargs = (['/usr/bin/systemctl', 'start', 'dirsrv@master_2'],), kwargs = {}
retcode = 1, cmd = ['/usr/bin/systemctl', 'start', 'dirsrv@master_2']

    def check_call(*popenargs, **kwargs):
        """Run command with arguments.  Wait for command to complete.  If
        the exit code was zero then return, otherwise raise
        CalledProcessError.  The CalledProcessError object will have the
        return code in the returncode attribute.

        The arguments are the same as for the Popen constructor.  Example:

        check_call(["ls", "-l"])
        """
        retcode = call(*popenargs, **kwargs)
        if retcode:
            cmd = kwargs.get("args")
            if cmd is None:
                cmd = popenargs[0]
>           raise CalledProcessError(retcode, cmd)
E           CalledProcessError: Command '['/usr/bin/systemctl', 'start', 'dirsrv@master_2']' returned non-zero exit status 1

/usr/lib64/python2.7/subprocess.py:541: CalledProcessError
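The failure surfaces through subprocess.check_call(), whose docstring is quoted in the traceback above: it raises CalledProcessError whenever the command exits with a nonzero status. A minimal Python 3 sketch of that contract (start_service is a hypothetical helper, not lib389 API; a child process that exits 1 stands in for the failing systemctl call):

```python
import subprocess
import sys


def start_service(cmd):
    """Hypothetical helper: run cmd and report failure instead of letting
    CalledProcessError propagate, as the traceback above does."""
    try:
        # check_call() waits for the command and raises CalledProcessError
        # on any nonzero exit status, carrying the status in .returncode.
        subprocess.check_call(cmd)
        return True
    except subprocess.CalledProcessError as e:
        print("Command %r returned non-zero exit status %d" % (e.cmd, e.returncode))
        return False


# A child process exiting 1 simulates the failing 'systemctl start' call.
start_service([sys.executable, "-c", "import sys; sys.exit(1)"])
```

In the test run above nothing catches this exception, so pytest records the CalledProcessError and the whole restore step fails.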
----------------------------- Captured stderr call -----------------------------
INFO:suites.replication.cleanallruv_test:Running test_cleanallruv_stress_clean...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: put all the masters under load...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: allow some time for replication to get flowing...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: put master 4 into read-only mode...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: allow some time for master 4 to push changes out (60 seconds)...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: disable replication on master 4...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: remove all the agreements to master 4...
INFO:lib389:Agreement (cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config) was successfully removed
INFO:lib389:Agreement (cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config) was successfully removed
INFO:lib389:Agreement (cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config) was successfully removed
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: Run the cleanAllRUV task...
INFO:lib389:cleanAllRUV task (task-10272016_023156) completed successfully
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: wait for all the updates to finish...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: check if all the replicas have been cleaned...
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: Master 1 is cleaned.
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: Master 2 is cleaned.
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: Master 3 is cleaned.
INFO:suites.replication.cleanallruv_test:test_cleanallruv_stress_clean: PASSED, restoring master 4...
INFO:suites.replication.cleanallruv_test:Sleep for 120 seconds to allow replication to complete...
INFO:suites.replication.cleanallruv_test:Restoring master 4...
INFO:lib389:List backend with suffix=dc=example,dc=com
WARNING:lib389:entry cn=changelog5,cn=config already exists
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38941,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38942,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38943,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
DEBUG:suites.replication.cleanallruv_test:cn=meTo_localhost.localdomain:38944,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config created
Job for dirsrv@master_2.service failed because a fatal signal was delivered causing the control process to dump core.
See "systemctl status dirsrv@master_2.service" and "journalctl -xe" for details.
============== 35 failed, 481 passed, 5 error in 8092.80 seconds ===============
+ MSG=FAILED
+ RC=1
+ sudo /usr/sbin/sendmail mreynolds@xxxxxxxxxx
+ exit 1
Build step 'Execute shell' marked build as failure
_______________________________________________
389-devel mailing list -- 389-devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to 389-devel-leave@xxxxxxxxxxxxxxxxxxxxxxx