Re: RBD-Mirror Snapshot Backup Image Uses

On Thu, Jan 21, 2021 at 9:40 AM Adam Boyhan <adamb@xxxxxxxxxx> wrote:
>
> After the resync finished, I can mount it now.
>
> root@Bunkcephtest1:~# rbd clone CephTestPool1/vm-100-disk-0@TestSnapper1 CephTestPool1/vm-100-disk-0-CLONE
> root@Bunkcephtest1:~# rbd-nbd map CephTestPool1/vm-100-disk-0-CLONE --id admin --keyring /etc/ceph/ceph.client.admin.keyring
> /dev/nbd0
> root@Bunkcephtest1:~# mount /dev/nbd0 /usr2
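>
> Cleanup afterwards is the standard rbd-nbd teardown:
>
> root@Bunkcephtest1:~# umount /usr2
> root@Bunkcephtest1:~# rbd-nbd unmap /dev/nbd0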
>
> It makes me a bit nervous that it got into that state while everything appeared OK.

We unfortunately need to create the snapshots that are being synced as
a first step, but perhaps we can add some extra guardrails to the
system to prevent premature use of a snapshot whose sync status
doesn't yet indicate completion.
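
In the meantime, a manual guardrail is to check the image's mirror
status on the secondary and only use a mirrored snapshot once the sync
has completed. A sketch of what to look for (the exact output fields
vary by release; recent releases include a syncing_percent entry in
the description JSON while a snapshot is still being copied):

root@Bunkcephtest1:~# rbd mirror image status CephTestPool1/vm-100-disk-0
vm-100-disk-0:
  global_id:   ...
  state:       up+replaying
  description: replaying, ...
  last_update: ...

If the description still reports an in-progress sync, hold off on
cloning that snapshot.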

> ________________________________
> From: "Jason Dillaman" <jdillama@xxxxxxxxxx>
> To: "adamb" <adamb@xxxxxxxxxx>
> Cc: "Eugen Block" <eblock@xxxxxx>, "ceph-users" <ceph-users@xxxxxxx>, "Matt Wilder" <matt.wilder@xxxxxxxxxx>
> Sent: Thursday, January 21, 2021 9:25:11 AM
> Subject: Re:  Re: RBD-Mirror Snapshot Backup Image Uses
>
> On Thu, Jan 21, 2021 at 8:34 AM Adam Boyhan <adamb@xxxxxxxxxx> wrote:
> >
> > When cloning the snapshot on the remote cluster, I can't see my ext4 filesystem.
> >
> > I'm using the exact same snapshot on both sides. Shouldn't this be consistent?
>
> Yes. Has the replication process completed ("rbd mirror image status
> CephTestPool1/vm-100-disk-0")?
>
> > Primary Site
> > root@Ccscephtest1:~# rbd snap ls --all CephTestPool1/vm-100-disk-0 | grep TestSnapper1
> >  10621  TestSnapper1                                                                               2 TiB             Thu Jan 21 08:15:22 2021  user
> >
> > root@Ccscephtest1:~# rbd clone CephTestPool1/vm-100-disk-0@TestSnapper1 CephTestPool1/vm-100-disk-0-CLONE
> > root@Ccscephtest1:~# rbd-nbd map CephTestPool1/vm-100-disk-0-CLONE --id admin --keyring /etc/ceph/ceph.client.admin.keyring
> > /dev/nbd0
> > root@Ccscephtest1:~# mount /dev/nbd0 /usr2
> >
> > Secondary Site
> > root@Bunkcephtest1:~# rbd snap ls --all CephTestPool1/vm-100-disk-0 | grep TestSnapper1
> >  10430  TestSnapper1                                                                                   2 TiB             Thu Jan 21 08:20:08 2021  user
> >
> > root@Bunkcephtest1:~# rbd clone CephTestPool1/vm-100-disk-0@TestSnapper1 CephTestPool1/vm-100-disk-0-CLONE
> > root@Bunkcephtest1:~# rbd-nbd map CephTestPool1/vm-100-disk-0-CLONE --id admin --keyring /etc/ceph/ceph.client.admin.keyring
> > /dev/nbd0
> > root@Bunkcephtest1:~# mount /dev/nbd0 /usr2
> > mount: /usr2: wrong fs type, bad option, bad superblock on /dev/nbd0, missing codepage or helper program, or other error.
> >
> >
> >
> > ________________________________
> > From: "adamb" <adamb@xxxxxxxxxx>
> > To: "dillaman" <dillaman@xxxxxxxxxx>
> > Cc: "Eugen Block" <eblock@xxxxxx>, "ceph-users" <ceph-users@xxxxxxx>, "Matt Wilder" <matt.wilder@xxxxxxxxxx>
> > Sent: Wednesday, January 20, 2021 3:42:46 PM
> > Subject: Re:  Re: RBD-Mirror Snapshot Backup Image Uses
> >
> > Awesome information. I knew I had to be missing something.
> >
> > All of my clients will be far newer than Mimic, so I don't think that will be an issue.
> >
> > Added the following to my ceph.conf on both clusters.
> >
> > rbd_default_clone_format = 2
> >
> > root@Bunkcephmon2:~# rbd clone CephTestPool1/vm-100-disk-0@TestSnapper1 CephTestPool2/vm-100-disk-0-CLONE
> > root@Bunkcephmon2:~# rbd ls CephTestPool2
> > vm-100-disk-0-CLONE
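> >
> > To double check that it really came out as a v2 clone, rbd info on the
> > clone should show the unprotected snapshot as its parent (on recent
> > releases it also lists clone-child under op_features):
> >
> > root@Bunkcephmon2:~# rbd info CephTestPool2/vm-100-disk-0-CLONE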
> >
> > I am sure I will be back with more questions.  Hoping to replace our Nimble storage with Ceph and NVMe.
> >
> > Appreciate it!
> >
> > ________________________________
> > From: "Jason Dillaman" <jdillama@xxxxxxxxxx>
> > To: "adamb" <adamb@xxxxxxxxxx>
> > Cc: "Eugen Block" <eblock@xxxxxx>, "ceph-users" <ceph-users@xxxxxxx>, "Matt Wilder" <matt.wilder@xxxxxxxxxx>
> > Sent: Wednesday, January 20, 2021 3:28:39 PM
> > Subject: Re:  Re: RBD-Mirror Snapshot Backup Image Uses
> >
> > On Wed, Jan 20, 2021 at 3:10 PM Adam Boyhan <adamb@xxxxxxxxxx> wrote:
> > >
> > > That's what I thought as well, especially based on this.
> > >
> > >
> > >
> > > Note: You may clone a snapshot from one pool to an image in another
> > > pool. For example, you may maintain read-only images and snapshots as
> > > templates in one pool, and writeable clones in another pool.
> > >
> > > root@Bunkcephmon2:~# rbd clone CephTestPool1/vm-100-disk-0@TestSnapper1 CephTestPool2/vm-100-disk-0-CLONE
> > > 2021-01-20T15:06:35.854-0500 7fb889ffb700 -1 librbd::image::CloneRequest: 0x55c7cf8417f0 validate_parent: parent snapshot must be protected
> > >
> > > root@Bunkcephmon2:~# rbd snap protect CephTestPool1/vm-100-disk-0@TestSnapper1
> > > rbd: protecting snap failed: (30) Read-only file system
> >
> > You have two options: (1) protect the snapshot on the primary image so
> > that the protection status replicates, or (2) use RBD clone v2, which
> > doesn't require protection but does require Mimic or later clients [1].
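> >
> > For option (1), protecting on the primary should be enough, since the
> > protected status mirrors over with the snapshot:
> >
> > root@Ccscephtest1:~# rbd snap protect CephTestPool1/vm-100-disk-0@TestSnapper1
> >
> > For option (2), set rbd_default_clone_format = 2 in ceph.conf (most
> > config options can also be passed as overrides on the rbd command
> > line, e.g. "rbd clone --rbd-default-clone-format 2 ...") and the clone
> > will work against the unprotected snapshot.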
> >
> > >
> > > From: "Eugen Block" <eblock@xxxxxx>
> > > To: "adamb" <adamb@xxxxxxxxxx>
> > > Cc: "ceph-users" <ceph-users@xxxxxxx>, "Matt Wilder" <matt.wilder@xxxxxxxxxx>
> > > Sent: Wednesday, January 20, 2021 3:00:54 PM
> > > Subject: Re:  Re: RBD-Mirror Snapshot Backup Image Uses
> > >
> > > But you should be able to clone the mirrored snapshot on the remote
> > > cluster even though it’s not protected, IIRC.
> > >
> > >
> > > Zitat von Adam Boyhan <adamb@xxxxxxxxxx>:
> > >
> > > > Two separate 4-node clusters with 10 OSDs in each node. Micron 9300
> > > > NVMes are the OSD drives, heavily based on the Micron/Supermicro
> > > > white papers.
> > > >
> > > > When I attempt to protect the snapshot on a remote image, it fails
> > > > with a read-only error.
> > > >
> > > > root@Bunkcephmon2:~# rbd snap protect
> > > > CephTestPool1/vm-100-disk-0@TestSnapper1
> > > > rbd: protecting snap failed: (30) Read-only file system
> >
> > [1] https://ceph.io/community/new-mimic-simplified-rbd-image-cloning/
> >
> > --
> > Jason
> >
>
>
> --
> Jason



-- 
Jason
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



