Hi William,
If you delete an entry, the delete operation will indeed be replayed, but be aware that for replicated operations it is the target entry's nsuniqueid that matters rather than the DN.
You should double-check that the good entry has a different nsuniqueid than the bad one (it should be the same as the conflict entry's). If that is the case, deleting the bad entry should not remove the good ones:
either it will delete a conflict entry associated with the bad entry (if one exists), or it will fail to replay that change.
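For reference, a quick way to compare the nsuniqueids might look like the following sketch. The hostnames, suffix, and entry DN are placeholders for your deployment, and the bind options assume Directory Manager with simple auth; adjust as needed:

```shell
# On the "bad" replica: nsuniqueid of the active (bad) entry
ldapsearch -H ldap://bad-replica.example.com -D "cn=Directory Manager" -W \
    -b "fqdn=host.example.com,cn=computers,cn=accounts,dc=example,dc=com" \
    -s base "(objectClass=*)" nsuniqueid

# On a "good" replica: nsuniqueid of the good entry (same DN)
ldapsearch -H ldap://good-replica.example.com -D "cn=Directory Manager" -W \
    -b "fqdn=host.example.com,cn=computers,cn=accounts,dc=example,dc=com" \
    -s base "(objectClass=*)" nsuniqueid

# On the "bad" replica: locate conflict entries; the conflict entry's
# nsuniqueid should match the good entry's nsuniqueid
ldapsearch -H ldap://bad-replica.example.com -D "cn=Directory Manager" -W \
    -b "cn=accounts,dc=example,dc=com" \
    "(&(objectClass=ldapSubEntry)(nsds5ReplConflict=*))" nsds5ReplConflict nsuniqueid
```

If the bad entry's nsuniqueid differs from the good entries', a delete targeting the bad entry cannot match them when replayed.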
Regards,
Pierre
On Thu, Jan 11, 2024 at 7:39 PM William Faulk <d4hgcdgdmj@xxxxxxxxxxxxxx> wrote:
I have an IdM/freeipa installation with around 30 replicas. I have an entry for a computer that exists across all of those replicas. However, one of the replicas has incorrect data at that DN, with the correct data found in a conflict entry. (It appears that the entry was created on that replica, somehow didn't get replicated anywhere else, and then the entry was created again on a different replica.)
I would like to resolve this naming conflict. The documentation (RHDS 10 Admin Guide, §15.26.1) states that the correct way to "promote" a conflict entry to the active entry is to first delete the active entry and then rename the conflict entry. (I'm running an old version of IdM that uses a 389-ds that doesn't include the dsconf utility.)
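For context, the documented "promote the conflict entry" procedure amounts to a delete followed by a modrdn. A sketch of those two steps with plain LDAP tools (DNs, hostname, and the conflict entry's nsuniqueid value are placeholders; verify against your own data before running anything):

```shell
# Step 1 (the step in question): delete the active (bad) entry.
# This delete is itself a replicated operation.
ldapdelete -H ldap://bad-replica.example.com -D "cn=Directory Manager" -W \
    "fqdn=host.example.com,cn=computers,cn=accounts,dc=example,dc=com"

# Step 2: rename the conflict entry back to the real DN.
# The conflict entry's DN includes its nsuniqueid in the RDN;
# deleteoldrdn: 1 drops that nsuniqueid component from the new RDN.
ldapmodify -H ldap://bad-replica.example.com -D "cn=Directory Manager" -W <<EOF
dn: nsuniqueid=<conflict-entry-uuid>+fqdn=host.example.com,cn=computers,cn=accounts,dc=example,dc=com
changetype: modrdn
newrdn: fqdn=host.example.com
deleteoldrdn: 1
EOF
```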
But it seems to me that if I send a delete operation to the replica with the bad data, it's just going to replicate that delete operation to all the other replicas, deleting the correct data from all the other replicas, which seems like an awfully dramatic action to take. To reiterate, the correct data exists on all of the other replicas in an entry with the same DN as the entry with the bad data on the "bad" replica.
I have tried to recreate this situation with a new DN that doesn't reference active systems, but I have been unsuccessful.
Can someone confirm that deleting the bad entry from the bad replica will cause the good entries on all the good replicas to also be deleted? If so, is there a better way to resolve this conflict? (At the moment, I'm inclined to just reinitialize the data on this one replica.)
--
_______________________________________________
389-users mailing list -- 389-users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to 389-users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/389-users@xxxxxxxxxxxxxxxxxxxxxxx
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue
--
389 Directory Server Development Team