Re: Ceph 'brag' Manager Module

On Thu, Mar 22, 2018 at 11:51 PM, Wido den Hollander <wido@xxxxxxxx> wrote:
> Hi,
>
> Recently I've been working on the Ceph 'brag' module for the Manager [0].
>
> When enabled (opt-in) this module will send metadata about the Ceph
> cluster back to the project.
>
> It compiles a JSON document which contains:
>
> - fsid
> - Creation date
> - Information about pools
> - Information about Placement Groups and data
>
> (An example JSON document is included at the bottom of this e-mail.)
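As a rough illustration of what assembling such a report involves (the function and parameter names here are hypothetical, not the module's actual API), it boils down to combining already-gathered cluster state into one JSON-serializable document:

```python
# Hypothetical sketch of assembling the report document described above;
# names are illustrative and do not match the actual 'brag' module code.
import json
from datetime import datetime, timezone

def compile_report(fsid, pools, usage):
    """Combine already-gathered cluster state into one JSON-serializable dict."""
    return {
        "fsid": fsid,
        "created": datetime.now(timezone.utc).isoformat(),
        "pools": pools,   # list of per-pool dicts (pg_num, size, ...)
        "usage": usage,   # cluster-wide totals (objects, bytes, pool count)
    }

report = compile_report(
    "d40e7248-ef94-438c-a771-f40c34e2e2ba",
    pools=[{"pool": 1, "pg_num": 8, "size": 3}],
    usage={"total_objects": 208, "pools": 6},
)
```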
>
> On the server side a Flask [1] application receives the reports and
> stores them in ElasticSearch.
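A minimal sketch of such a receiver might look like the following; the `/report` path matches the endpoint mentioned below, but everything else is illustrative, and an in-memory dict stands in for the ElasticSearch index to keep the example self-contained:

```python
# Minimal Flask receiver sketch: accepts a JSON report and stores it by fsid.
# An in-memory dict stands in for ElasticSearch here; the real server would
# index the document into ES instead.
from flask import Flask, jsonify, request

app = Flask(__name__)
reports = {}  # fsid -> latest report; placeholder for the ES index

@app.route("/report", methods=["POST"])
def receive_report():
    doc = request.get_json(force=True)
    reports[doc["fsid"]] = doc
    return jsonify({"stored": doc["fsid"]})
```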
>
> For my testing environment I've set up my own server where data is sent
> to and stored:
>
> - http://brag.widodh.nl/
> - http://brag.widodh.nl:9200/brag/report/d40e7248-ef94-438c-a771-f40c34e2e2ba
>
> I'm only gathering data about the Ceph cluster which I can easily obtain
> in the Manager.
>
> As a project we mainly want to know:
>
> - What version of Ceph do people use?
> - How many pools?
> - What daemons and how many?
> - What OS and kernels?
>
> Every 72 hours the module will send this data to brag.ceph.com/report,
> where it will be stored.
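The interval check itself is simple; this sketch shows the idea only, as the actual module's option names and how it persists the last-sent timestamp will differ:

```python
# Illustrative check of the 72-hour reporting interval; names are
# hypothetical, not the module's actual configuration options.
REPORT_INTERVAL = 72 * 60 * 60  # seconds between reports

def report_due(last_sent, now):
    """True when no report was ever sent, or the interval has elapsed."""
    return last_sent is None or now - last_sent >= REPORT_INTERVAL
```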
>
> Using the fsid we can figure out how clusters change over time and how
> they are growing (or not).
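On the analysis side (not part of the module itself), growth between successive reports sharing an fsid could be derived along these lines; the field names follow the example JSON below:

```python
# Sketch: order the reports for one fsid by creation time and compare used
# bytes between the earliest and latest report to see whether the cluster
# is growing or shrinking.
def used_bytes_growth(history):
    ordered = sorted(history, key=lambda r: r["created"])
    first, last = ordered[0], ordered[-1]
    return last["usage"]["total_used_bytes"] - first["usage"]["total_used_bytes"]
```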
>
> Users can optionally add a 'description' and 'email' to their cluster so
> that we can learn more about the system.
>
> The 'public' flag controls whether a user's data is made public. This
> part still has to be written, but I imagine a website on brag.ceph.com
> where you can 'brag' about your Ceph cluster and show it off to the rest
> of the world.
>
> Right now the aim is to collect data (opt-in!) and use that to improve
> the project.
>
> Questions which still need to be answered are:
>
> - Who hosts brag.ceph.com?
> - Who has access to the data on brag.ceph.com?
>
> For now I'd like to get feedback on the idea and the module and see
> where it can be improved.
>
> Feedback, suggestions, flames are welcome!

Very cool!  I've left some comments on the commit.

I anticipate wanting various per-subsystem additions to this over time
for things like flagging which cephfs features are enabled.  We could
also use the pool tags to report whether a system is using RGW/RBD.
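Since Luminous, pools carry application tags (set via `ceph osd pool application enable`), so deriving interface usage could look like the sketch below; the `application_metadata` field name is an assumption based on the `ceph osd dump` JSON output:

```python
# Sketch: collect application tags from a list of pool dicts to infer
# whether a cluster uses RGW, RBD, or CephFS. The "application_metadata"
# key is assumed to match the `ceph osd dump` JSON output.
def applications_in_use(pools):
    apps = set()
    for pool in pools:
        apps.update(pool.get("application_metadata", {}).keys())
    return apps
```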

John

>
> Wido
>
> [0]: https://github.com/wido/ceph/tree/mgr-brag
> [1]: http://flask.pocoo.org/
>
> {
>   "fs": {
>     "count": 1
>   },
>   "description": "My test cluster",
>   "created": "2018-02-26T10:01:27.790360",
>   "osd": {
>     "count": 3,
>     "require_min_compat_client": "jewel",
>     "require_osd_release": "luminous"
>   },
>   "usage": {
>     "total_objects": 208,
>     "total_used_bytes": 3428843520,
>     "pg_num": 160,
>     "total_bytes": 32199671808,
>     "pools": 6,
>     "total_avail_bytes": 28770828288
>   },
>   "contact": null,
>   "mon": {
>     "count": 3,
>     "features": {
>       "optional": [],
>       "persistent": [
>         "kraken",
>         "luminous"
>       ]
>     }
>   },
>   "pools": [
>     {
>       "crush_rule": 0,
>       "min_size": 2,
>       "pg_num": 8,
>       "pgp_num": 8,
>       "type": 1,
>       "pool": 1,
>       "size": 3
>     },
>     {
>       "crush_rule": 0,
>       "min_size": 2,
>       "pg_num": 8,
>       "pgp_num": 8,
>       "type": 1,
>       "pool": 2,
>       "size": 3
>     },
>     {
>       "crush_rule": 0,
>       "min_size": 2,
>       "pg_num": 8,
>       "pgp_num": 8,
>       "type": 1,
>       "pool": 3,
>       "size": 3
>     },
>     {
>       "crush_rule": 0,
>       "min_size": 2,
>       "pg_num": 8,
>       "pgp_num": 8,
>       "type": 1,
>       "pool": 4,
>       "size": 3
>     },
>     {
>       "crush_rule": 0,
>       "min_size": 2,
>       "pg_num": 64,
>       "pgp_num": 64,
>       "type": 1,
>       "pool": 5,
>       "size": 3
>     },
>     {
>       "crush_rule": 0,
>       "min_size": 2,
>       "pg_num": 64,
>       "pgp_num": 64,
>       "type": 1,
>       "pool": 6,
>       "size": 3
>     }
>   ],
>   "organization": null,
>   "public": false,
>   "fsid": "d40e7248-ef94-438c-a771-f40c34e2e2ba",
>   "metadata": {
>     "osd": {
>       "distro_description": {
>         "Ubuntu 16.04.3 LTS": 3
>       },
>       "rotational": {
>         "0": 3
>       },
>       "kernel_version": {
>         "4.13.0-36-generic": 3
>       },
>       "arch": {
>         "x86_64": 3
>       },
>       "cpu": {
>         "Intel(R) Core(TM) i7-7600U CPU @ 2.80GHz": 3
>       },
>       "osd_objectstore": {
>         "bluestore": 3
>       },
>       "kernel_description": {
>         "#40~16.04.1-Ubuntu SMP Fri Feb 16 23:25:58 UTC 2018": 3
>       },
>       "os": {
>         "Linux": 3
>       },
>       "ceph_version": {
>         "ceph version 12.2.3 (2dab17a455c09584f2a85e6b10888337d1ec8949)
> luminous (stable)": 3
>       },
>       "distro": {
>         "ubuntu": 3
>       }
>     },
>     "mon": {
>       "distro_description": {
>         "Ubuntu 16.04.3 LTS": 3
>       },
>       "kernel_version": {
>         "4.13.0-36-generic": 3
>       },
>       "arch": {
>         "x86_64": 3
>       },
>       "cpu": {
>         "Intel(R) Core(TM) i7-7600U CPU @ 2.80GHz": 3
>       },
>       "kernel_description": {
>         "#40~16.04.1-Ubuntu SMP Fri Feb 16 23:25:58 UTC 2018": 3
>       },
>       "os": {
>         "Linux": 3
>       },
>       "ceph_version": {
>         "ceph version 12.2.3 (2dab17a455c09584f2a85e6b10888337d1ec8949)
> luminous (stable)": 3
>       },
>       "distro": {
>         "ubuntu": 3
>       }
>     }
>   }
> }
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


