Re: dashboard->orchestrator interface

On Tue, Mar 27, 2018 at 7:38 AM, Ricardo Dias <rdias@xxxxxxxx> wrote:
> Hi,
>
> Thanks for the summary/plan that resulted from last week's meetings.
> Some comments/questions below:
>
> On 26-03-2018 15:35, John Spray wrote:
>>
>> Hi folks,
>>
>>
>> We discussed having a common set of functions, with implementations
>> living inside ceph-mgr as some extra per-backend python code.  The
>> underlying orchestration platform would remain the source of truth for
>> information about available hardware and what services are running
>> where: the new mgr code is just "glue".
>
>
> Sounds like the best way to go. Similar to what we discussed a few weeks ago
> in another email thread.
>
>>
>> The key capabilities from the whiteboard last Wednesday were:
>> 1. Getting an inventory from the orchestrator (list of nodes+drives)
>> 2. Add/remove of stateful services (osds, mons) targeting particular
>> nodes+drives
>> 3. Add/remove of stateless services, leaving node location up to
>> orchestrator
>
>
> Each of the stateless services (and this might also apply to future
> stateful services) is configured in its own way. From an implementation
> perspective we need to write specialized code (within the orchestration
> module) to handle each stateless service. This means that maybe we
> should also add the capability of listing the types of services
> supported by the orchestrator.
>
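
That could probably be covered by one extra method on the interface (see
the rough sketch a bit further down) -- the method name below is made up
purely for illustration:

class Orchestrator(object):
    def available_service_types(self):
        """Return the service types this backend knows how to deploy,
        e.g. ["osd", "mon", "mds", "rgw", "nfs", "iscsi"]."""
        raise NotImplementedError()
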
>> 4. Requesting orchestrator to upgrade the Ceph cluster
>> 5. Getting status from orchestrator about currently running daemons
>>
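
To make that list a bit more concrete, the five capabilities could map
onto an abstract class roughly like this (illustrative sketch only --
the method names are not the actual draft, which lives in the PR below):

class Orchestrator(object):
    """Abstract interface that each orchestrator backend would implement."""

    def get_inventory(self):
        # 1. List of nodes and the drives attached to each
        raise NotImplementedError()

    def add_stateful_service(self, service_type, node, drives):
        # 2. e.g. create an OSD on a particular node using particular
        #    drives, or a mon on a particular node
        raise NotImplementedError()

    def remove_stateful_service(self, service_type, service_id):
        raise NotImplementedError()

    def add_stateless_service(self, service_type, spec):
        # 3. e.g. create an MDS or RGW; node placement is left to the
        #    orchestrator
        raise NotImplementedError()

    def remove_stateless_service(self, service_type, service_id):
        raise NotImplementedError()

    def upgrade(self, target_version):
        # 4. Ask the orchestrator to upgrade the whole Ceph cluster
        raise NotImplementedError()

    def describe_services(self):
        # 5. Status of the daemons the orchestrator is currently running
        raise NotImplementedError()
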
>> I've written some draft python classes in this PR to serve as a basis
>> for continued discussion
>> (https://github.com/ceph/ceph/pull/21046/files) -- please add your
>> comments.
>>
>>
>> When it comes to doing a real implementation of this interface, I'm
>> thinking that each of the orchestrator backends should be its own
>> ceph-mgr module -- that way, they have access to the mgr module
>> configuration and persistence hooks and we don't have to re-invent any
>> of that for orchestrator modules specifically.  The dashboard module
>> can then look at which module is enabled to work out which one to call
>> through to (although perhaps some extra protection is needed to avoid
>> someone enabling more than one).  This all relies on the forthcoming
>> ability for modules to call out to one another, of course.
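
On the dashboard side that might end up looking roughly like the
following (hand-wavy sketch: the module names are placeholders, and the
exact shape of the mgr map data is an assumption):

# Hand-wavy sketch of how the dashboard could work out which
# orchestrator backend module to call through to.
ORCHESTRATOR_MODULES = ["rook", "deepsea", "ansible"]  # placeholder names

def active_orchestrator(dashboard_module):
    # Assumes the mgr map dump exposes the list of enabled modules
    # under a "modules" key.
    enabled = set(dashboard_module.get("mgr_map")["modules"])
    active = [name for name in ORCHESTRATOR_MODULES if name in enabled]
    if len(active) > 1:
        raise RuntimeError("more than one orchestrator module enabled: "
                           + ", ".join(active))
    return active[0] if active else None
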
>
>
> Was the ability for modules to call other modules already discussed
> somewhere? Or is there already an implementation draft?

A draft implementation of inter-module calls looks like this:
https://github.com/jcsp/ceph/commit/486037ab2efcf5f1a1f299a2dc6e9fd4299225db
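
A call from the dashboard through to an orchestrator backend would then
look something like this (hypothetical usage, assuming the draft exposes
a remote(module_name, method_name, ...) style helper; details may
differ):

from mgr_module import MgrModule

class Module(MgrModule):
    """Hypothetical dashboard-side code, assuming a remote() helper
    along the lines of the draft commit above."""

    def list_inventory(self):
        backend = "rook"  # whichever orchestrator module is enabled
        # Ask the orchestrator module for its node/drive inventory and
        # hand the result back to the caller (e.g. a REST controller).
        return self.remote(backend, "get_inventory")
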

John

>
>>
>> Cheers,
>> John
>
> --
> Ricardo Dias
> Senior Software Engineer - Storage Team
> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
> HRB 21284
> (AG Nürnberg)
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


