Re: pool/volume live migration

This is indeed for an OpenStack cloud - it didn't require any particular
level of performance (so it was created on an EC pool), and now it does :(

So the idea would be:
1- create a new pool
2- change cinder to use the new pool

Then, for each volume (roughly sketched below):
  3- stop the usage of the volume (stop the instance?)
  4- "live migrate" the volume to the new pool
  5- start up the instance again
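
Roughly, per volume, something like this (a sketch only - pool, image and
instance names are placeholders, and "rbd migration prepare" will refuse to
run while the image still has active clients):

  openstack server stop <instance-uuid>
  rbd migration prepare ec-pool/volume-<id> repl-pool/volume-<id>
  # re-point the Cinder volume at the new pool before restarting
  # (see Jason's note below about switching the volume registration)
  openstack server start <instance-uuid>
  rbd migration execute repl-pool/volume-<id>   # background deep-copy
  rbd migration commit repl-pool/volume-<id>    # drop the source image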


Does that sound right?

thanks,

On Fri, Feb 8, 2019 at 4:25 PM Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
>
> Correction: at least for the initial version of live-migration, you
> need to temporarily stop clients that are using the image, execute
> "rbd migration prepare", and then restart the clients against the new
> destination image. The "prepare" step will fail if it detects that the
> source image is in-use.
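>
> As a sanity check before the prepare (a sketch only; image and pool names
> are placeholders), you can confirm nothing is still watching the image:
>
>     rbd status old-pool/my-image       # should report "Watchers: none"
>     rbd migration prepare old-pool/my-image new-pool/my-image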
>
> On Fri, Feb 8, 2019 at 9:00 AM Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
> >
> > Indeed, it is forthcoming in the Nautilus release.
> >
> > You would initiate a "rbd migration prepare <src-image-spec>
> > <dst-image-spec>" to transparently link the dst-image-spec to the
> > src-image-spec. Any active Nautilus clients against the image will
> > then re-open the dst-image-spec for all IO operations. Read requests
> > that cannot be fulfilled by the new dst-image-spec will be forwarded
> > to the original src-image-spec (similar to how parent/child cloning
> > behaves). Write requests to the dst-image-spec will force a deep-copy
> > of all impacted src-image-spec backing data objects (including
> > snapshot history) to the associated dst-image-spec backing data
> > object.  At any point a storage admin can run "rbd migration execute"
> > to deep-copy all src-image-spec data blocks to the dst-image-spec.
> > Once the migration is complete, you would just run "rbd migration
> > commit" to remove src-image-spec.
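> >
> > As a concrete sketch of that sequence (pool and image names are
> > placeholders only):
> >
> >     rbd migration prepare old-pool/my-image new-pool/my-image
> >     rbd migration execute new-pool/my-image
> >     rbd migration commit new-pool/my-image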
> >
> > Note: at some point prior to "rbd migration commit", you will need to
> > take minimal downtime to switch OpenStack volume registration from the
> > old image to the new image if you are changing pools.
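> >
> > Exactly how to do that depends on your deployment. As a purely
> > illustrative sketch (assuming the usual Cinder "volumes.host" format of
> > host@backend#pool - verify against your own Cinder release first), it
> > could amount to something like:
> >
> >     mysql cinder -e "UPDATE volumes SET host='controller@rbd#new-pool' WHERE id='<volume-uuid>';"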
> >
> > On Fri, Feb 8, 2019 at 5:33 AM Caspar Smit <casparsmit@xxxxxxxxxxx> wrote:
> > >
> > > Hi Luis,
> > >
> > > According to slide 21 of Sage's presentation at FOSDEM, it is coming in Nautilus:
> > >
> > > https://fosdem.org/2019/schedule/event/ceph_project_status_update/attachments/slides/3251/export/events/attachments/ceph_project_status_update/slides/3251/ceph_new_in_nautilus.pdf
> > >
> > > Kind regards,
> > > Caspar
> > >
> > > On Fri, Feb 8, 2019 at 11:24 AM Luis Periquito <periquito@xxxxxxxxx> wrote:
> > >>
> > >> Hi,
> > >>
> > >> a recurring topic is live migration and pool type change (moving from
> > >> EC to replicated or vice versa).
> > >>
> > >> When I went to the OpenStack Open Infrastructure Summit, Sage
> > >> mentioned support for live migration of volumes (and, as a result,
> > >> of pools) in Nautilus. Is this still the case, and is live migration
> > >> expected to be working by then?
> > >>
> > >> thanks,
> > >
> >
> >
> >
> > --
> > Jason
>
>
>
> --
> Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


