Can you share 'ceph versions' output?
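That shows whether all daemons are on the same release. A rough sketch
of what it looks like (version strings and daemon counts here are only
illustrative, yours will differ):

# ceph versions
{
    "mon": {"ceph version 17.2.7 (...) quincy (stable)": 3},
    "osd": {"ceph version 17.2.7 (...) quincy (stable)": 12},
    "rbd-mirror": {"ceph version 17.2.7 (...) quincy (stable)": 1},
    "overall": {"ceph version 17.2.7 (...) quincy (stable)": 16}
}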
Do you see the same behaviour when adding a snapshot schedule, e.g.
rbd -p <pool> mirror snapshot schedule add 30m
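You can then verify the schedule is in place and firing with (sketch,
substitute your pool name):

# rbd -p <pool> mirror snapshot schedule ls
every 30m
# rbd -p <pool> mirror snapshot schedule status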
I can't reproduce it, unfortunately; creating those mirror snapshots
manually still works for me.
Quoting scott.cairns@xxxxxxxxxxxxxxxxx:
We have the rbd-mirror daemon running on both sites, but replication
is one-way only: the daemon on the remote site is the only live one.
The daemon on the primary site is just there in case we ever need to
set up two-way mirroring; it is not currently configured for any
replication. So it makes sense that there's nothing in the log files
on the primary site, as that daemon is doing nothing.
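(For reference, the peer configuration can be double-checked on either
side with something like:

# rbd mirror pool info ceph-ssd

which prints the mirroring mode, the site name and any configured peer
sites along with their replication direction.)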
I'm not seeing any errors in the rbd-mirror daemon log at either end.
The primary's log is blank as expected, and the error appears on the
primary when the snapshot is taken, so the remote cluster never sees
any errors.
We receive the error both when we run the snapshot command manually
and when it runs through cron, e.g. running the following on the
primary site:
# rbd mirror image snapshot ceph-ssd/vm-101-disk-1
Snapshot ID: 58393
2024-08-26T12:39:54.958+0100 7b5ad6a006c0 -1
librbd::mirror::snapshot::CreatePrimaryRequest: 0x7b5ac0019e60
handle_unlink_peer: failed to unlink peer: (2) No such file or
directory
This appears on the console as part of the command's output (we used
to get only the Snapshot ID: xxxxx line), not in any rbd log files.
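(In case it's useful for debugging: the mirror snapshots on the image,
including the peer UUIDs they are linked to, can be listed with

# rbd snap ls --all ceph-ssd/vm-101-disk-1

where the NAMESPACE column shows the mirror state of each snapshot.)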
Hope that clarifies it? Thanks.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx