Cumulative followup to various insightful replies.

I wrote:
>>>> No, it's not really possible currently and we have no plans to add
>>>> such support since it would not be of any long-term value.
>>
>> The long-term value would be the ability to migrate volumes from, say,
>> a replicated pool to an EC pool without extended downtime.

Among the replies:

> That's why the Mimic release should offer a specific set of "rbd
> migration XYZ" actions to perform minimal downtime migrations.

That'd be awesome, if it can properly handle client-side attachments.
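If it helps frame the discussion, here's the shape I'd expect such a
command set to take, going by the usual prepare/execute/commit pattern --
every name and argument below is a guess on my part until the feature
actually lands:

    # Hypothetical pre-release sketch; commands and arguments are
    # assumptions, not a shipped interface.
    rbd migration prepare rbd/vol1 newpool/vol1  # link source to destination
    rbd migration execute newpool/vol1           # copy blocks in the background
    rbd migration commit  newpool/vol1           # retire the source image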
> I was saying that hacking around rbd-mirror to add such a feature has
> limited long-term value given the release plan. Plus, in the immediate
> term you can use a kernel md RAID device or QEMU+librbd to perform the
> migration for you w/ minimal downtime (albeit with potentially more
> hands-on setup involved).

Great ideas if one controls both the back end and the guest OS. In our
case we can't go mucking around inside the guests.
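For anyone who does control the full stack, my reading of the md
suggestion is roughly the following -- an untested sketch, with
placeholder device and image names, and it assumes whatever uses the
volume can be briefly repointed at the md device:

    # map the old image and create/map the new one (names are placeholders)
    rbd map rbd/vol1                    # say it appears as /dev/rbd0
    rbd create --size 1024 newpool/vol1 # size must match the source
    # (for an EC destination: keep the image in a replicated pool and
    #  point --data-pool at the EC pool)
    rbd map newpool/vol1                # say it appears as /dev/rbd1

    # wrap the old device in a superblock-less, degraded RAID1, then
    # add the new device so md resyncs every block onto it
    mdadm --build /dev/md0 --level=1 --raid-devices=2 /dev/rbd0 missing
    mdadm /dev/md0 --add /dev/rbd1
    cat /proc/mdstat                    # wait for the resync to finish
    mdadm /dev/md0 --fail /dev/rbd0
    mdadm /dev/md0 --remove /dev/rbd0

The catch is that the consumer has to sit on /dev/md0 rather than the
bare rbd device, which is exactly the sort of change we can't make.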
> If you use qemu, it's also possible to use the drive-mirror feature
> from qemu (it can mirror and migrate from one storage to another
> without downtime).

Interesting idea, and a qemu function I was not previously aware of.
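For the archive, here's my untested reading of how that would look
against a libvirt guest -- the guest name, drive id, and pool/image
names are all placeholders:

    # pre-create the destination image, then mirror the live disk onto it
    rbd create --size 1024 newpool/vol1

    # talking QMP behind libvirt's back; virsh blockcopy is the
    # friendlier wrapper if your libvirt is new enough
    virsh qemu-monitor-command guest1 --pretty \
      '{ "execute": "drive-mirror",
         "arguments": { "device": "drive-virtio-disk0",
                        "target": "rbd:newpool/vol1",
                        "format": "raw",
                        "sync": "full",
                        "mode": "existing" } }'

    # once the job reports ready, pivot the guest onto the copy
    virsh qemu-monitor-command guest1 --pretty \
      '{ "execute": "block-job-complete",
         "arguments": { "device": "drive-virtio-disk0" } }'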
It has some potential wrinkles, though:

o Considerably more network traffic and I/O load to/from the hypervisor
  (or whatever you call the hosts where your guest VMs run)
o Scaling to thousands of volumes, each potentially TBs in size
o Handling unattached volumes
o Co-ordination with / prevention of user attach/detach operations
  during the process

--
aad
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com