Re: Data Missing with RBD-Mirror

That is correct. On Prod we do have 22TB and on DR we only have 5.5TB
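
(For reference, the used space on each side can be compared with `rbd du`
on each cluster, e.g.

  # rbd du cifs/research_data
  # rbd --cluster cephdr du cifs/research_data

using the same pool/image names as in the thread below; the figures above
may of course have been taken a different way.)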

Thanks,
-Vikas

-----Original Message-----
From: Mykola Golub <to.my.trociny@xxxxxxxxx> 
Sent: Monday, February 22, 2021 10:47 AM
To: Vikas Rana <vrana@xxxxxxxxxxxx>
Cc: 'Eugen Block' <eblock@xxxxxx>; ceph-users@xxxxxxx; dillaman@xxxxxxxxxx
Subject: Re:  Re: Data Missing with RBD-Mirror

On Mon, Feb 22, 2021 at 09:41:44AM -0500, Vikas Rana wrote:
 
> # rbd journal info -p cifs --image research_data
> rbd journal '11cb6c2ae8944a':
>         header_oid: journal.11cb6c2ae8944a
>         object_oid_prefix: journal_data.17.11cb6c2ae8944a.
>         order: 24 (16MiB objects)
>         splay_width: 4

Eh, I asked for the wrong command. Actually, I wanted to see `rbd journal
status`. Anyway, I have that info in the mirror status below, which looks
up to date now.
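
For reference, `rbd journal status` takes the same pool/image options as
`rbd journal info` and shows the commit position of each registered journal
client, e.g.:

  # rbd journal status -p cifs --image research_data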

> We restarted the rbd-mirror process on the DR side.
> # rbd --cluster cephdr mirror pool status cifs --verbose
> health: OK
> images: 1 total
>     1 replaying
> 
> research_data:
>   global_id:   69656449-61b8-446e-8b1e-6cf9bd57d94a
>   state:       up+replaying
>   description: replaying, master_position=[object_number=396351, tag_tid=4,
>     entry_tid=455084955], mirror_position=[object_number=396351, tag_tid=4,
>     entry_tid=455084955], entries_behind_master=0
>   last_update: 2021-02-19 15:36:30

And I suppose that, after creating and replaying a snapshot, you still see
files missing on the secondary when you mount it?
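
Something like the following should do for that check (the snapshot name,
device node, and mount point are just placeholders):

  (on the primary)
  # rbd snap create cifs/research_data@verify

  (on the DR cluster, once the snapshot has replayed)
  # rbd --cluster cephdr snap ls cifs/research_data
  # rbd --cluster cephdr map cifs/research_data@verify --read-only
  # mount -o ro /dev/rbd0 /mnt/verify

Depending on the filesystem you may need extra read-only mount options
(e.g. norecovery for XFS).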

--
Mykola Golub
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


