Re: Orchestrator interface/Rook update

John, this is incredible.

I look forward to trying this out. =)

One thing I'd like to see in a more final version is batching the OSD operations so they can be queued and all run at once. I think the ideal workflow would be to create all the OSDs in a single batch, which I'd hope would be closer to an O(1) process than the current serial O(n) one. At any rate, I think we want to make sure our API design plans for that future.
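
Just to illustrate what I mean (the class and method names below are made
up, not the actual orchestrator interface): the call could accept a whole
list of drive specs and hand back a single completion, so the backend is
free to fan the work out in parallel instead of us looping over
one-OSD-at-a-time calls:

    # Hypothetical sketch of a batched OSD-creation call; names are
    # illustrative, not the current interface.
    class OsdSpec(object):
        def __init__(self, node_name, device_paths):
            self.node_name = node_name          # host that owns the devices
            self.device_paths = device_paths    # e.g. ["/dev/sdb", "/dev/sdc"]

    class Orchestrator(object):
        def create_osds(self, osd_specs):
            """Queue creation of many OSDs in one request and return a
            single completion, so the backend (Rook, DeepSea, ...) can
            run them in parallel rather than one serial call per OSD."""
            raise NotImplementedError()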

Blaine

On 29/05/18 15:47, John Spray wrote:
Hi all,

Here's a fun demo of current code running on top of Rook!
https://imgur.com/a/1iOUR6o

I'm about to disappear on vacation for two weeks, so I wanted to give a
quick update on the orchestrator interface stuff (see the previous thread
"dashboard->orchestrator interface").

There's now a simple CLI for testing this stuff out (module
"orchestrator_cli").  The CLI is not fancy -- this stuff is primarily
intended to be called from the GUI eventually.  That demo is running
on a cluster that was bootstrapped using Rook to set up mon and mgr
only.  There's a little bit of Hollywood magic in the recording,
skipping time between creating the service and seeing it pop up in
"ceph status" -- in reality that's about 60 seconds for an OSD and
about 20 seconds for an MDS.

The code is here:
https://github.com/jcsp/rook/pull/1
https://github.com/jcsp/ceph/pull/7

In case anyone wants to try running this, you also need to create a special
RBAC config for your mgr daemon in Kubernetes, so that it can do stuff
with the Kubernetes API, like this: https://pastebin.com/Y5eyjPDN
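
For anyone who can't reach the pastebin: the config is along the lines of
a ClusterRole/ClusterRoleBinding granting the mgr's service account access
to the objects the module manages.  The rules and names below are only an
illustrative sketch, not a copy of the actual config:

    # Illustrative RBAC sketch only -- the resource list, service account
    # name and namespace are assumptions; see the pastebin for the real thing.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: ceph-mgr-orchestrator
    rules:
    - apiGroups: [""]
      resources: ["pods", "services", "configmaps"]
      verbs: ["get", "list", "watch", "create", "update", "delete"]
    - apiGroups: ["apps"]
      resources: ["deployments", "replicasets"]
      verbs: ["get", "list", "watch", "create", "update", "delete"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: ceph-mgr-orchestrator
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: ceph-mgr-orchestrator
    subjects:
    - kind: ServiceAccount
      name: rook-ceph-mgr        # assumed service account name
      namespace: rook-ceph       # assumed namespace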

Next priorities for me are:
  - Improve completions to be a two-phase thing: first report when the
change is persistent (i.e. updated CRD or equivalent manifests in
other orchestrators), second report when the change has taken effect
(e.g. a created OSD is actually running).  (Rough sketch after this list.)
  - Implement the "stateful service rule" operators (i.e. the ones that
leave OSD device choice up to the orchestrator).  So far we had been
thinking of those as operating at a node level of granularity, but
Sage has persuasively suggested that they should also allow specifying
a list of devices, so that users can get the ceph-volume
auto-configuration within a particular subset while leaving other
devices alone.  (A possible signature is sketched below.)
  - Implement OSD removal, and make sure it works in such a way that
the primitives can be used to build things like filestore->bluestore
migration.
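
To make the first point concrete, here's a rough sketch of how a two-phase
completion could look from the caller's side -- the class and attribute
names are guesses, not the final interface:

    # Rough sketch of a two-phase completion; names are illustrative.
    class WriteCompletion(object):
        def __init__(self):
            # Phase 1: the CRD (or equivalent manifest) update is durably
            # stored by the orchestrator.
            self.is_persistent = False
            # Phase 2: the change has actually taken effect, e.g. the new
            # OSD daemon is really running.
            self.is_effective = False

        @property
        def is_complete(self):
            return self.is_persistent and self.is_effective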
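
And a possible signature for the stateful service rule with an optional
device list -- again just a sketch, the real call may end up looking
different:

    # Hypothetical signature only, not the implemented interface.
    class Orchestrator(object):
        def add_stateful_service_rule(self, service_type, node_name,
                                      devices=None):
            """If devices is None, device selection on the node is left
            entirely to the orchestrator; otherwise ceph-volume
            auto-configuration is restricted to the listed devices and
            everything else on the node is left alone."""
            raise NotImplementedError()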

This is starting to get to the point where it works well enough to get
a rough and ready GUI on top of it, if anyone is up for that!  It
would also be interesting to try mapping the as-yet-unimplemented
add_stateful_service_rule calls to DeepSea's concept of roles.

Cheers,
John

