Re: CEPH DR RBD Mount

FYI -- that "entries_behind_master=175226727" bit is telling you that
it has only mirrored about 80% of the recent changes from primary to
non-primary.
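
You can derive that figure from the two entry_tid values in your status output; a quick awk check (numbers copied straight from the description line):

```shell
# progress = mirror entry_tid / master entry_tid; the difference is
# exactly the entries_behind_master value shown in the status output
awk 'BEGIN {
  master = 902879873; mirror = 727653146
  printf "replayed: %.1f%% (%d entries behind)\n",
         100 * mirror / master, master - mirror
}'
# prints: replayed: 80.6% (175226727 entries behind)
```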

Was the filesystem already in place? Are there any partitions/LVM
volumes in use on the device? Did you map the volume read-only?
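
For reference, a read-only mapping would look something like this (pool/image names taken from your status output; the device name and the norecovery option are assumptions -- norecovery applies to XFS, ext4 uses noload instead):

```shell
# Map the non-primary image read-only so nothing can write to it
# while rbd-mirror is still replaying the journal
rbd-nbd map --read-only --cluster cephdr nfs/dir_research

# Mount read-only and skip log recovery, since a mid-replay image is
# only crash-consistent and still changing underneath (XFS shown)
mount -o ro,norecovery /dev/nbd2 /mnt
```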
On Tue, Nov 27, 2018 at 8:49 AM Vikas Rana <vikasrana3@xxxxxxxxx> wrote:
>
> Hi There,
>
> We are replicating a 100TB RBD image to DR site. Replication works fine.
>
> rbd --cluster cephdr mirror pool status nfs --verbose
>
> health: OK
>
> images: 1 total
>
>     1 replaying
>
>
>
> dir_research:
>
>   global_id:   11e9cbb9-ce83-4e5e-a7fb-472af866ca2d
>
>   state:       up+replaying
>
>   description: replaying, master_position=[object_number=591701, tag_tid=1, entry_tid=902879873], mirror_position=[object_number=446354, tag_tid=1, entry_tid=727653146], entries_behind_master=175226727
>
>   last_update: 2018-11-14 16:17:23
>
>
>
>
> We then use rbd-nbd to map the RBD image at the DR site, but when we try to mount it, we get
>
>
> # mount /dev/nbd2 /mnt
>
> mount: block device /dev/nbd2 is write-protected, mounting read-only
>
> mount: /dev/nbd2: can't read superblock
>
>
>
> We are using Ceph 12.2.8 (Luminous).
>
>
> Any help will be greatly appreciated.
>
>
> Thanks,
>
> -Vikas
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Jason


