On 10-10-2017 14:21, Alfredo Deza wrote:
> On Tue, Oct 10, 2017 at 8:14 AM, Willem Jan Withagen <wjw@xxxxxxxxxxx> wrote:
>> On 10-10-2017 13:51, Alfredo Deza wrote:
>>> On Mon, Oct 9, 2017 at 8:50 PM, Christian Balzer <chibi@xxxxxxx> wrote:
>>>>
>>>> Hello,
>>>>
>>>> (pet peeve alert)
>>>> On Mon, 9 Oct 2017 15:09:29 +0000 (UTC) Sage Weil wrote:
>>>>
>>>>> To put this in context, the goal here is to kill ceph-disk in mimic.
>>
>> Right, that means we need a ceph-volume zfs before things get shot down.
>> Fortunately there is little history to carry over.
>>
>> But then still somebody needs to do the work. ;-|
>> Haven't looked at ceph-volume, but I'll put it on the agenda.
>
> An interesting take on zfs (and anything else we didn't set up from
> the get-go) is that we envisioned developers might want to craft
> plugins for ceph-volume and expand its capabilities, without placing
> on us the burden of coming up with support for every new device
> technology.
>
> The other nice aspect of this is that a plugin would get to re-use
> all the tooling in place in ceph-volume. The plugin architecture
> exists, but it isn't fully developed/documented yet.

I was part of the original discussion when it was decided that
ceph-volume was going to be pluggable, and I would be a great
proponent of the plugins.

If only because ceph-disk is rather convoluted to add to. Not that it
cannot be done, but the code is loaded with Linuxisms for its devices,
and it takes some care not to upset the old code, even to the point
that a single routine gets refactored into three new routines: one OS
selector, then the old code for Linux, and the new code for FreeBSD.
And that starts to look like a poor man's plugin. :) (I've put a rough
sketch of both approaches below.)

But I still need to find the time, and to sharpen my Python skills.
Luckily mimic is nine months away. :)

--WjW
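
P.S. For anyone else eyeing the plugin route: as far as I can tell from
the ceph-volume source, plugins are discovered through a setuptools
entry point group named 'ceph_volume_handlers' (take that name as my
reading of the code, not gospel). Here is a minimal sketch of what
packaging a zfs plugin could look like; every project, module and class
name below is invented for illustration:

    # setup.py for a hypothetical ceph-volume-zfs plugin
    from setuptools import setup, find_packages

    setup(
        name='ceph-volume-zfs',
        version='0.0.1',
        packages=find_packages(),
        entry_points=dict(
            # ceph-volume scans this group and exposes each handler as
            # a subcommand, so this one would become 'ceph-volume zfs'
            ceph_volume_handlers=[
                'zfs = ceph_volume_zfs.main:ZFS',
            ],
        ),
    )

    # ceph_volume_zfs/main.py: handler skeleton, modeled loosely on
    # how the built-in 'lvm' subcommand is wired up
    class ZFS(object):
        help = 'Deploy OSDs on ZFS'

        def __init__(self, argv):
            # argv is everything after 'ceph-volume zfs'
            self.argv = argv

        def main(self):
            # real work goes here: parse self.argv, then
            # prepare/activate OSDs on zpools
            print('ceph-volume zfs called with: %s' % self.argv)

The attraction is exactly what Alfredo describes: the handler only has
to implement the zfs-specific bits and inherits all the tooling already
in place in ceph-volume.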
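
P.P.S. And for contrast, the "poor man's plugin" shape that the FreeBSD
work in ceph-disk keeps producing looks roughly like this (made-up
names, not the actual ceph-disk code):

    import platform

    def _activate_linux(dev):
        # the old Linux-only body moves here unchanged
        raise NotImplementedError

    def _activate_freebsd(dev):
        # the new FreeBSD (geom/zfs flavored) implementation goes here
        raise NotImplementedError

    def activate(dev):
        # the OS selector: the only piece the old call sites see
        if platform.system() == 'FreeBSD':
            return _activate_freebsd(dev)
        return _activate_linux(dev)

It works, but every OS-specific routine needs its own selector, which
is exactly the kind of boilerplate a real plugin interface would make
unnecessary.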