On Mon, Jan 23, 2017 at 11:47 PM, int32bit <krystism@xxxxxxxxx> wrote:
> I'm a newcomer to Ceph. I deployed two Ceph clusters, one of which is
> used as a mirror cluster. When I created an image, I found that the
> primary image was stuck in 'up+stopped' status while the non-primary
> image's status was 'up+syncing'. I'm really not sure whether this is
> an OK state, and I couldn't find any references about sync statuses
> in the docs.

That is the expected behavior: the primary image is listed as
"up+stopped" since it isn't syncing with the remote, non-primary
image. The "rbd mirror pool status" command should list your health
as OK -- when something is wrong it will list the health as WARNING
or ERROR (see the example below my signature).

> When I tried to remove the image from the primary node, I caught the
> following error:
>
> # rbd --cluster server-31 rm int32bit-test/mirror-test
> 2017-01-24 12:40:41.494963 7fd8dff91d80 -1 librbd: image has watchers - not
> removing
> Removing image: 0% complete...failed.
> rbd: error: image still has watchers
> This means the image is still open or the client using it crashed. Try
> again after closing/unmapping it or waiting 30s for the crashed client
> to timeout.
>
> I wonder if my mirror status is OK and how to remove a mirrored image.

Is the image that you are trying to remove still bootstrapping from
the primary cluster to the non-primary cluster? This is a known
limitation in v10.2.3 and was resolved in v10.2.4 [1]. I've sketched a
possible workaround below my signature.

> My Ceph version is 10.2.3, and the default rbd features is set to 125.
>
> Thanks for any help!

[1] http://tracker.ceph.com/issues/17559

--
Jason
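
On the status question: you can check the overall mirroring health at
the pool level, and per-image details with --verbose. Using the
cluster and pool names from your own commands (the output below is
only illustrative, from memory; the exact counts and states will
differ on your clusters):

# rbd --cluster server-31 mirror pool status int32bit-test
health: OK
images: 1 total
    1 stopped

# rbd --cluster server-31 mirror pool status --verbose int32bit-test

The verbose form prints a per-image entry including the state (e.g.
up+stopped on the primary) and a short description. As long as the
health line says OK, the up+stopped / up+syncing pair you are seeing
is nothing to worry about.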
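
On the removal question: if you can't upgrade to v10.2.4 right away,
two things you could try on v10.2.3. This is a sketch, not a tested
recipe, and "server-32" below is just a stand-in for whatever your
secondary cluster is actually named.

First, let the initial sync finish and retry the removal once the
non-primary side reports 'up+replaying' instead of 'up+syncing':

# rbd --cluster server-32 mirror image status int32bit-test/mirror-test

Alternatively, since you are presumably using pool-mode mirroring
(features 125 includes journaling), disabling the journaling feature
on the primary image turns off mirroring for that image, which should
release the rbd-mirror daemon's watch and let the removal proceed:

# rbd --cluster server-31 feature disable int32bit-test/mirror-test journaling
# rbd --cluster server-31 rm int32bit-test/mirror-test

Be aware that disabling mirroring this way should also schedule
deletion of the non-primary copy on the secondary cluster.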