https://fedorahosted.org/389/ticket/47606
https://fedorahosted.org/389/attachment/ticket/47606/0001-Ticket-47606-replica-init-bulk-import-errors-should-.patch

Description:

1. maxbersize:

If the size of an entry is larger than the consumer's maxbersize, the following error used to be logged:

  Incoming BER Element was too long, max allowable is ### bytes.
  Change the nsslapd-maxbersize attribute in cn=config to increase.

This message does not indicate how large the maxbersize needs to be. This patch adds code to retrieve the size of the ber that failed. Revised message:

  Incoming BER Element was @@@ bytes, max allowable is ### bytes.
  Change the nsslapd-maxbersize attribute in cn=config to increase.

Note: There is no lber API that returns the ber size when lber fails to handle the ber, so this patch borrows the internal structure of the ber and reads the size from it. This could be risky, since the size or layout of the ber structure could change in the openldap lber library. (A sketch of the idea is appended at the end of this message.)

2. cache size:

The bulk import depends upon the nsslapd-cachememsize value in the backend instance entry (e.g., cn=userRoot,cn=ldbm database,cn=plugins,cn=config). If an entry is larger than the cachememsize, the bulk import used to fail with this message:

  import userRoot: REASON: entry too large (@@@ bytes) for the import buffer size (### bytes). Try increasing nsslapd-cachememsize.

The message follows the skipping-entry message:

  import userRoot: WARNING: skipping entry "<DN>"

but the import did NOT actually "skip" the entry and continue; it failed at that point and completely wiped out the backend database. This patch modifies the message as follows:

  import userRoot: REASON: entry too large (@@@ bytes) for the effective import buffer size (### bytes). Try increasing nsslapd-cachememsize for the backend instance "userRoot".

and, as the revised message says, the import now just skips the failed entry and continues the bulk import. (A sketch of the skip-and-continue logic is also appended below.)
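
For the first fix, here is a minimal sketch of the "borrow the internal structure" idea. The mirrored struct layout, field names, and helper names are assumptions for illustration only; the real layout lives in openldap's private lber headers and has to match the library actually linked in, which is exactly why the description calls this risky. Only the message text comes from the patch description.

  #include <stdio.h>
  #include <lber.h>

  /* Hypothetical mirror of the private berelement layout. The fields and
   * their order are assumptions; they must match the lber library that is
   * actually in use, which is what makes this approach fragile. */
  struct borrowed_berelement {
      struct {
          short          lbo_valid;
          unsigned short lbo_options;
          int            lbo_debug;
      } ber_opts;
      ber_tag_t ber_tag;
      ber_len_t ber_len;    /* length of the incoming BER element */
      /* ... remaining private fields are not needed to read the length ... */
  };

  /* Peek at the length recorded inside an opaque BerElement. */
  ber_len_t
  sketch_ber_peek_len(BerElement *ber)
  {
      return ber ? ((struct borrowed_berelement *)ber)->ber_len : 0;
  }

  /* Produce the revised message from the description (plain stderr here;
   * the server would use its own logging). */
  void
  sketch_log_oversized_ber(BerElement *ber, ber_len_t maxbersize)
  {
      fprintf(stderr,
              "Incoming BER Element was %lu bytes, max allowable is %lu bytes. "
              "Change the nsslapd-maxbersize attribute in cn=config to increase.\n",
              (unsigned long)sketch_ber_peek_len(ber),
              (unsigned long)maxbersize);
  }

In the server this peeked length would be logged at the point where reading the incoming PDU fails because the element exceeds nsslapd-maxbersize.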
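
For the second fix, a sketch of the skip-and-continue behavior. Everything here except the warning text is invented for illustration: the entry type, the queueing helper, and the function names are hypothetical stand-ins for the real import code, and the effective import buffer size is simply passed in as a number derived from nsslapd-cachememsize.

  #include <stdio.h>
  #include <stddef.h>

  struct sketch_entry {
      const char *dn;
      size_t      size;    /* serialized size of the entry in bytes */
  };

  /* Hypothetical stand-in for handing one entry to the import pipeline. */
  static int sketch_queue_entry(const struct sketch_entry *e) { (void)e; return 0; }

  int
  sketch_bulk_import(const char *instance,      /* e.g. "userRoot"           */
                     size_t effective_bufsize,  /* derived from cachememsize */
                     const struct sketch_entry *entries, size_t nentries)
  {
      size_t skipped = 0;

      for (size_t i = 0; i < nentries; i++) {
          if (entries[i].size > effective_bufsize) {
              /* Old behavior: abort here, which left the half-built backend
               * wiped. New behavior: warn, skip this entry, keep importing. */
              fprintf(stderr,
                      "import %s: WARNING: skipping entry \"%s\"\n"
                      "import %s: REASON: entry too large (%zu bytes) for the "
                      "effective import buffer size (%zu bytes). Try increasing "
                      "nsslapd-cachememsize for the backend instance \"%s\".\n",
                      instance, entries[i].dn,
                      instance, entries[i].size, effective_bufsize, instance);
              skipped++;
              continue;
          }
          if (sketch_queue_entry(&entries[i]) != 0) {
              return -1;   /* an unrelated hard failure still aborts */
          }
      }
      return (int)skipped; /* import completes; caller can report skips */
  }

The point of the sketch is only the control flow: the oversized entry is reported and skipped, and the import keeps going instead of failing and wiping out the backend database.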