Re: dashboard in mimic

(broad topic :-), trimming back to my immediate comment/question)

On 12/20/2017 09:55 PM, John Spray wrote:
> What?
> =====
> 
> Extend the dashboard module to provide management of the cluster, in
> addition to monitoring.  This would potentially include anything you
> can currently do with the Ceph CLI, plus additional functionality like
> calling out to a container framework to spawn additional daemons.
> 
> The idea is to wrap things up into friendlier higher-level operations,
> rather than just having buttons for the existing CLI operations.
> Example workflows of interest:
>  - a CephFS page where you can click "New Filesystem", and the pools
> and MDS daemons will all be created for you.
>  - similarly for RGW: ability to enable RGW and control the number of
> gateway daemons
>  - driving OSD addition/retirement, and also format conversions
> (e.g. filestore->bluestore)
> 
> Some of the functionality would depend on how Ceph is being run:
> especially, anything that detects devices and starts/stops physical
> services would depend on an environment that provides that (such as
> Kubernetes).

Any configuration/management of things that Ceph already knows about is
"easy" to implement (creating pools, RBD volumes, cluster config, etc.).
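For instance, the "New Filesystem" workflow could be little more than a
composition of mon commands the CLI already exposes. A rough sketch (the
mon_command stub below stands in for the real MgrModule command interface,
and the exact command fields are assumptions for illustration):

```python
# Sketch of a higher-level "New Filesystem" operation composed from
# existing Ceph mon commands. A real version would live in a mgr module
# and use its command-sending API; here mon_command is a stub that just
# records what would be issued.

issued = []  # commands the stub "sends" to the mons

def mon_command(cmd):
    """Stand-in for the mgr's mon command interface."""
    issued.append(cmd)
    return 0, '', ''  # (retcode, outbuf, outs)

def create_filesystem(name, pg_num=8):
    """Create the pools and the filesystem in one friendly operation."""
    meta_pool = '{}_metadata'.format(name)
    data_pool = '{}_data'.format(name)
    for pool in (meta_pool, data_pool):
        mon_command({'prefix': 'osd pool create',
                     'pool': pool, 'pg_num': pg_num})
    mon_command({'prefix': 'fs new', 'fs_name': name,
                 'metadata': meta_pool, 'data': data_pool})

create_filesystem('myfs')
```

The dashboard button then maps to one call, with the pool naming and
sizing policy decided by the module rather than the admin.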

For spawning/configuring additional daemons, is it worth considering
some kind of thin layer (another mgr module or modules?) that lets the
admin choose whether this is done by k8s, salt, ansible, whatever?
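To make that concrete, the thin layer might amount to a small interface
that each deployment backend implements, with the admin selecting one by
name (say, via a mgr config option). All class and method names below are
invented for illustration; a real backend would drive the k8s API, a salt
runner, an ansible playbook, etc.:

```python
# Rough sketch of a pluggable daemon-deployment layer for the dashboard.
# The interface and backend names are hypothetical.

class DeploymentBackend:
    """What the dashboard would call when it needs daemons created."""
    def spawn_daemon(self, daemon_type, daemon_id):
        raise NotImplementedError

class KubernetesBackend(DeploymentBackend):
    def spawn_daemon(self, daemon_type, daemon_id):
        # Would create or scale a Deployment via the k8s API.
        return 'k8s: scheduled {}.{}'.format(daemon_type, daemon_id)

class SaltBackend(DeploymentBackend):
    def spawn_daemon(self, daemon_type, daemon_id):
        # Would fire a salt orchestration on the target minion.
        return 'salt: scheduled {}.{}'.format(daemon_type, daemon_id)

BACKENDS = {'kubernetes': KubernetesBackend, 'salt': SaltBackend}

def get_backend(name):
    """Admin picks the backend by name, e.g. from mgr configuration."""
    return BACKENDS[name]()

print(get_backend('kubernetes').spawn_daemon('mds', 'a'))
```

The dashboard only ever talks to the interface, so environments without
any such backend simply lose the spawn buttons rather than the whole UI.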

Regards,

Tim
-- 
Tim Serong
Senior Clustering Engineer
SUSE
tserong@xxxxxxxx
