>>> Putting it in the mgr also would allow us to do things like scheduling
>>> time of day to do replacements, or maybe adjust recovery throttling
>>> automatically, or whatever else we decide would improve the overall
>>> process.
>>
>> +1 to putting features in mgr if that lets us avoid having as many
>> different implementations of those features as there are orchestration
>> tools.
>
> BTW I am not saying to *not* put it there, I am mostly interested in
> finding the gaps where ceph-volume can help, like mounting/unmounting,
> systemd, dmcrypt, etc...

Forgive me if I've missed something, but one advantage of using the mgr is
also the visibility it has across the cluster, notably to restrict repaving
parallelism to a single OSD per host / failure domain / cluster, as desired.

That said, if something like this gets implemented, here's a vote to
generalize it a bit into a generic repaver, not just Filestore -> Bluestore.
Some of you may remember the "-n size=65536" nightmare I fought in the past.
I dreamt of a daemon that would just run in the background repaving OSDs,
but at the time I lacked the resources to develop one that would be safe to
let loose in a brittle production cluster.

Or even more abstracted, a generic OSD iterator that would apply a given
script / task to each OSD, subject to the obvious constraints. This would
seem to be too big a bite to take just to repave, but I can dream ;)

-- aad
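
P.S. A very rough sketch of the kind of constrained iterator I have in mind,
just to make the idea concrete. Everything here is hypothetical: schedule(),
run(), and wait_until_clean() are made-up placeholders, not an existing Ceph,
ceph-volume, or mgr module API.

    #!/usr/bin/env python3
    # Hypothetical sketch of a generic OSD iterator: apply a task to every
    # OSD while never touching more than `limit` OSDs per failure domain
    # at a time. None of these names correspond to a real Ceph API.
    from collections import defaultdict

    def schedule(osds, domain_of, limit=1):
        """Yield batches of OSD ids, at most `limit` per failure domain."""
        queues = defaultdict(list)
        for osd in osds:
            queues[domain_of(osd)].append(osd)
        while any(queues.values()):
            batch = []
            for pending in queues.values():
                batch.extend(pending[:limit])
                del pending[:limit]
            yield batch

    def run(osds, domain_of, task, wait_until_clean):
        for batch in schedule(osds, domain_of):
            for osd in batch:        # a batch could also be run in parallel
                task(osd)            # e.g. repave, or any per-OSD script
            wait_until_clean()       # block until PGs are active+clean again

    if __name__ == "__main__":
        # Toy cluster: four OSDs spread over two hosts.
        domains = {0: "host-a", 1: "host-a", 2: "host-b", 3: "host-b"}
        run(domains, domains.get,
            task=lambda o: print(f"would repave osd.{o}"),
            wait_until_clean=lambda: print("waiting for active+clean"))

With a limit of one OSD per host, the toy run above would touch osd.0 and
osd.2 first (one per host), wait for the cluster to settle, then do osd.1
and osd.3. The same loop generalizes to any failure domain level, or to the
whole cluster, by swapping out domain_of and the limit.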