Re: How to recover/mount mirrored rbd image for file recovery

> Sorry, to clarify, you also need to restrict the clients to mimic or
> later to use RBD clone v2 in the default "auto" version selection
> mode:
>
> $ ceph osd set-require-min-compat-client mimic

Ah, of course, thanks for the clarification.


Quoting Jason Dillaman <jdillama@xxxxxxxxxx>:

On Mon, Mar 23, 2020 at 5:02 AM Eugen Block <eblock@xxxxxx> wrote:

> > To be able to mount the mirrored rbd image (without a protected snapshot):
> >   rbd-nbd --read-only map cluster5-rbd/vm-114-disk-1 --cluster backup
> >
> > I just need to upgrade my backup cluster?

> No, that only works with snapshots, although I'm not sure you can
> really skip the protection. I have two Octopus lab clusters, and this
> procedure only works with a protected snapshot. If I try to clone the
> unprotected snapshot on the remote cluster, I get an error:
>
> remote:~ # rbd clone pool1/image1@snap1 pool2/image1
> 2020-03-23T09:55:54.813+0100 7f7cc6ffd700 -1
> librbd::image::CloneRequest: 0x558af5fb9f00 validate_parent: parent
> snapshot must be protected
> rbd: clone error: (22) Invalid argument
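>
> For what it's worth, protecting the snapshot on the primary first does
> make the clone succeed (a sketch reusing the pool1/image1 names from
> above; the protect operation is mirrored along with the snapshot):
>
> primary:~ # rbd snap protect pool1/image1@snap1
> remote:~ # rbd clone pool1/image1@snap1 pool2/image1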

Sorry, to clarify, you also need to restrict the clients to mimic or
later to use RBD clone v2 in the default "auto" version selection
mode:

$ ceph osd set-require-min-compat-client mimic
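
The current value can be checked before and after the change (a quick
verification step; it should report "mimic" once the command above has
been run):

$ ceph osd get-require-min-compat-client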



Quoting Ml Ml <mliebherr99@xxxxxxxxxxxxxx>:

> Okay, so I have Ceph version 14.2.6 (Nautilus) on my source cluster
> and Ceph version 12.2.13 (Luminous) on my backup cluster.
>
> To be able to mount the mirrored rbd image (without a protected snapshot):
>   rbd-nbd --read-only map cluster5-rbd/vm-114-disk-1 --cluster backup
>
> I just need to upgrade my backup cluster?
>
>
> Thanks,
> Michael
>
> On Thu, Mar 19, 2020 at 1:06 PM Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
>>
>> On Thu, Mar 19, 2020 at 6:19 AM Eugen Block <eblock@xxxxxx> wrote:
>> >
>> > Hi,
>> >
>> > one workaround would be to create a protected snapshot on the primary
>> > image which is also mirrored, and then clone that snapshot on the
>> > remote site. That clone can be accessed as required.
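>> >
>> > Roughly, the flow would look like this (a sketch with hypothetical
>> > image and snapshot names, assuming the filesystem sits on the first
>> > partition of the image):
>> >
>> >   # on the primary cluster:
>> >   rbd snap create pool1/image1@restore
>> >   rbd snap protect pool1/image1@restore
>> >   # on the remote cluster, once the snapshot has replicated:
>> >   rbd clone pool1/image1@restore pool1/image1-restore
>> >   rbd-nbd map pool1/image1-restore
>> >   mount -o ro /dev/nbd0p1 /mnt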
>>
>> +1. This is the correct approach. If you are using a Mimic+ cluster
>> (i.e. require OSD release >= Mimic), you can skip the protect step.
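>>
>> With that requirement met, the unprotected snapshot can be cloned
>> directly on the backup side (a sketch; "@snap1" is a hypothetical
>> snapshot name):
>>
>>   rbd --cluster backup clone cluster5-rbd/vm-114-disk-1@snap1 \
>>     cluster5-rbd/vm-114-restore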
>>
>> > I'm not sure if there's a way to directly access the remote image
>> > since it's read-only.
>> >
>> > Regards,
>> > Eugen
>> >
>> >
>> > Quoting Ml Ml <mliebherr99@xxxxxxxxxxxxxx>:
>> >
>> > > Hello,
>> > >
>> > > My goal is to back up a Proxmox cluster with rbd-mirror for disaster
>> > > recovery. Promoting/demoting etc. works great.
>> > >
>> > > But how can I access a single file on the mirrored cluster? I tried:
>> > >
>> > >    root@ceph01:~# rbd-nbd --read-only map cluster5-rbd/vm-114-disk-1 --cluster backup
>> > >    /dev/nbd1
>> > >
>> > > But I get:
>> > >    root@ceph01:~# fdisk -l /dev/nbd1
>> > >    fdisk: cannot open /dev/nbd1: Input/output error
>> > >
>> > > dmesg shows stuff like:
>> > >    [Thu Mar 19 09:29:55 2020]  nbd1: unable to read partition table
>> > > [Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
>> > > [Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
>> > >
>> > > Here is my state:
>> > >
>> > > root@ceph01:~# rbd --cluster backup mirror pool status cluster5-rbd --verbose
>> > > health: OK
>> > > images: 3 total
>> > >     3 replaying
>> > >
>> > > vm-106-disk-0:
>> > >   global_id:   0bc18ee1-1749-4787-a45d-01c7e946ff06
>> > >   state:       up+replaying
>> > >   description: replaying, master_position=[object_number=3, tag_tid=2,
>> > > entry_tid=3], mirror_position=[object_number=3, tag_tid=2,
>> > > entry_tid=3], entries_behind_master=0
>> > >   last_update: 2020-03-19 09:29:17
>> > >
>> > > vm-114-disk-1:
>> > >   global_id:   2219ffa9-a4e0-4f89-b352-ff30b1ffe9b9
>> > >   state:       up+replaying
>> > >   description: replaying, master_position=[object_number=390,
>> > > tag_tid=6, entry_tid=334290], mirror_position=[object_number=382,
>> > > tag_tid=6, entry_tid=328526], entries_behind_master=5764
>> > >   last_update: 2020-03-19 09:29:17
>> > >
>> > > vm-115-disk-0:
>> > >   global_id:   2b0af493-14c1-4b10-b557-84928dc37dd1
>> > >   state:       up+replaying
>> > >   description: replaying, master_position=[object_number=72,
>> > > tag_tid=1, entry_tid=67796], mirror_position=[object_number=72,
>> > > tag_tid=1, entry_tid=67796], entries_behind_master=0
>> > >   last_update: 2020-03-19 09:29:17
>> > >
>> > > More dmesg stuff:
>> > > [Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
>> > > [Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
>> > > [Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
>> > > [Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
>> > > [Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
>> > > [Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
>> > > [Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
>> > > [Thu Mar 19 09:30:02 2020] block nbd1: Other side returned error (30)
>> > > [Thu Mar 19 09:30:02 2020] blk_update_request: 95 callbacks suppressed
>> > > [Thu Mar 19 09:30:02 2020] blk_update_request: I/O error, dev nbd1, sector 0
>> > > [Thu Mar 19 09:30:02 2020] buffer_io_error: 94 callbacks suppressed
>> > > [Thu Mar 19 09:30:02 2020] Buffer I/O error on dev nbd1, logical block 0, async page read
>> > > [Thu Mar 19 09:30:02 2020] block nbd1: Other side returned error (30)
>> > > [Thu Mar 19 09:30:02 2020] blk_update_request: I/O error, dev nbd1, sector 1
>> > > [Thu Mar 19 09:30:02 2020] Buffer I/O error on dev nbd1, logical block 1, async page read
>> > > [Thu Mar 19 09:30:02 2020] block nbd1: Other side returned error (30)
>> > > [Thu Mar 19 09:30:02 2020] blk_update_request: I/O error, dev nbd1, sector 2
>> > > [Thu Mar 19 09:30:02 2020] Buffer I/O error on dev nbd1, logical block 2, async page read
>> > > [Thu Mar 19 09:30:02 2020] block nbd1: Other side returned error (30)
>> > > [Thu Mar 19 09:30:02 2020] blk_update_request: I/O error, dev nbd1, sector 3
>> > > [Thu Mar 19 09:30:02 2020] Buffer I/O error on dev nbd1, logical block 3, async page read
>> > > [Thu Mar 19 09:30:02 2020] block nbd1: Other side returned error (30)
>> > > [Thu Mar 19 09:30:02 2020] blk_update_request: I/O error, dev nbd1, sector 4
>> > > [Thu Mar 19 09:30:02 2020] Buffer I/O error on dev nbd1, logical block 4, async page read
>> > > [Thu Mar 19 09:30:02 2020] block nbd1: Other side returned error (30)
>> > > [Thu Mar 19 09:30:02 2020] blk_update_request: I/O error, dev nbd1, sector 5
>> > > [Thu Mar 19 09:30:02 2020] Buffer I/O error on dev nbd1, logical block 5, async page read
>> > > [Thu Mar 19 09:30:02 2020] block nbd1: Other side returned error (30)
>> > > [Thu Mar 19 09:30:02 2020] blk_update_request: I/O error, dev nbd1, sector 6
>> > > [Thu Mar 19 09:30:02 2020] Buffer I/O error on dev nbd1, logical block 6, async page read
>> > > [Thu Mar 19 09:30:02 2020] block nbd1: Other side returned error (30)
>> > > [Thu Mar 19 09:30:02 2020] blk_update_request: I/O error, dev nbd1, sector 7
>> > > [Thu Mar 19 09:30:02 2020] Buffer I/O error on dev nbd1, logical block 7, async page read
>> > >
>> > > Do I have to stop the replaying, or how can I mount the image on the
>> > > backup cluster?
>> > >
>> > > Thanks,
>> > > Michael
>> >
>> >
>> >
>>
>>
>> --
>> Jason





--
Jason


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


