Re: Ceph 'brag' Manager Module

On 03/22/2018 11:41 PM, Sage Weil wrote:
> \o/
> 
> I have a couple small suggestions:
> 
> 1- Instead of including the actual fsid, generate a new uuid the first 
> time brag runs and stick it in config-key (e.g., brag_uuid).  That will 
> still uniquely identify the cluster but not in a way that will be easy to 
> map back to an actual cluster (e.g., if someone pastes ceph -s to 
> ceph-users and includes the uuid).
> 

Good one! I've added that, still need to test it all though.

I used 'report_id' as a name and added that to the report.
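
A minimal sketch of what that looks like (assuming MgrModule key/value
helpers along the lines of get_store()/set_store(); the real code is in
the branch linked below):

import uuid

def get_report_id(self):
    # Return the persisted report_id, generating and storing a new
    # UUID on the first run so the actual fsid never leaves the
    # cluster. (Sketch; assumes config-key get/set helpers.)
    report_id = self.get_store('report_id')
    if report_id is None:
        report_id = str(uuid.uuid4())
        self.set_store('report_id', report_id)
    return report_id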

> 2- Sometimes there is identifying information in the version strings, like 
> the kernel description.  Perhaps we should sanitize it to only include the 
> version number portion?
> 

How? I'm just fetching the metadata as provided by the MON/daemons. I
can start to parse and strip it, but if the metadata format ever
changes, that parsing breaks again.

We might want to return the ceph version in the metadata as a 'number'
only, or something like:

{
  "ceph_version_major": 12,
  "ceph_version_minor": 2,
  "ceph_version_patch": 4
}

That could be easily included.
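
For illustration, a hypothetical helper deriving those fields from a
full version string (the regex and field names are my assumption, not
existing module code):

import re

def version_components(version_str):
    # Extract numeric components from e.g.
    # 'ceph version 12.2.4 (sha1) luminous (stable)'.
    m = re.search(r'(\d+)\.(\d+)\.(\d+)', version_str)
    if m is None:
        return None
    return {
        'ceph_version_major': int(m.group(1)),
        'ceph_version_minor': int(m.group(2)),
        'ceph_version_patch': int(m.group(3)),
    }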

> 3- For the ceph version as well, I think we only care about the base 
> version and whether or not there is anything on top.  So instead of 'ceph 
> version 12.2.4-gabcde (sha1)' it could be 'ceph version 12.2.4-*'.
> 

Yes, see my comment above.
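
Something like this could work (a sketch only; it assumes anything
directly after the x.y.z version means there are patches on top):

import re

def sanitize_ceph_version(version_str):
    # 'ceph version 12.2.4-42-gabcdef (sha1) ...' -> 'ceph version 12.2.4-*'
    # 'ceph version 12.2.4 (sha1) ...'            -> 'ceph version 12.2.4'
    m = re.match(r'ceph version (\d+\.\d+\.\d+)(\S*)', version_str)
    if m is None:
        return 'unknown'
    base = 'ceph version ' + m.group(1)
    return base + '-*' if m.group(2) else base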

> 4- Perhaps include a summary of other daemon types? I don't see the mds 
> count below, and a count of other items in the servicemap (rgw, 
> rbd-mirror) would be interesting.
> 

If I'm able to fetch that through the mgr I will. Need to look into that.
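
If the servicemap is exposed to modules it could be as simple as this
(a sketch; I still need to verify that self.get('service_map') works
in a mgr module and returns this layout):

def service_counts(self):
    # Count daemons per service (rgw, rbd-mirror, ...) from the
    # servicemap. (Sketch only.)
    counts = {}
    smap = self.get('service_map') or {}
    for name, svc in smap.get('services', {}).items():
        counts[name] = len(svc.get('daemons', {}))
    return counts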

The updated code is here: https://github.com/wido/ceph/commits/mgr-brag

Not opening a PR yet, as I still have more work to do on the server side.

I'm not a true GUI expert, so hopefully somebody else will step in and
build something that lets us fetch the data out of ElasticSearch.
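
Even without a GUI the data can be queried directly; for example, the
average OSD count across all reports (a sketch against my test server,
with field names as in the example report at the bottom of this e-mail):

from elasticsearch import Elasticsearch

es = Elasticsearch(['http://brag.widodh.nl:9200'])
result = es.search(index='brag', body={
    'size': 0,
    'aggs': {'avg_osds': {'avg': {'field': 'osd.count'}}},
})
print(result['aggregations']['avg_osds']['value'])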

How are we going to do governance of this data? The board?

Wido

> sage
> 
> 
> On Thu, 22 Mar 2018, Wido den Hollander wrote:
> 
>> Hi,
>>
>> Recently I've been working on the Ceph 'brag' module for the Manager [0].
>>
>> When enabled (opt-in), this module will send metadata about the Ceph
>> cluster back to the project.
>>
>> It compiles a JSON document which contains:
>>
>> - fsid
>> - Creation date
>> - Information about pools
>> - Information about Placement Groups and data
>>
>> (An example JSON document is included at the bottom of this e-mail.)
>>
>> On the server side a Flask [1] application receives the reports and
>> stores them in ElasticSearch.
>>
>> For my testing environment I've set up my own server where data is sent
>> to and stored:
>>
>> - http://brag.widodh.nl/
>> -
>> http://brag.widodh.nl:9200/brag/report/d40e7248-ef94-438c-a771-f40c34e2e2ba
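>>
>> The receiving end boils down to something like this (a simplified
>> sketch, not the actual server code; it indexes each report under its
>> fsid, matching the URL above):
>>
>> from flask import Flask, request, jsonify
>> from elasticsearch import Elasticsearch
>>
>> app = Flask(__name__)
>> es = Elasticsearch()
>>
>> @app.route('/report', methods=['POST'])
>> def receive_report():
>>     # Index the report keyed on its fsid so a resubmission from the
>>     # same cluster overwrites the previous document. (Sketch only.)
>>     doc = request.get_json(force=True)
>>     es.index(index='brag', doc_type='report', id=doc['fsid'], body=doc)
>>     return jsonify({'status': 'ok'})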
>>
>> I'm only gathering data about the Ceph cluster that I can easily
>> obtain in the Manager.
>>
>> As a project we mainly want to know:
>>
>> - What version of Ceph do people use?
>> - How many pools?
>> - What daemons and how many?
>> - What OS and kernels?
>>
>> The module will send this data every 72 hours back to
>> brag.ceph.com/report and it will be stored there.
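>>
>> The upload itself is essentially the following (a sketch; the function
>> name and the use of HTTP POST are assumptions):
>>
>> import requests
>>
>> def send_report(report):
>>     # Send the compiled JSON report to the collection endpoint;
>>     # the module repeats this every 72 hours. (Sketch only.)
>>     requests.post('http://brag.ceph.com/report', json=report)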
>>
>> Using the fsid we can figure out how clusters change over time and how
>> they are growing (or not).
>>
>> Users can optionally add a 'description' and 'email' to their cluster
>> so that we can find out more about the system.
>>
>> The 'public' flag controls whether this data is made public. That
>> part still has to be written, but I imagine a website on
>> brag.ceph.com where you can 'brag' about your Ceph cluster and show it
>> off to the rest of the world.
>>
>> Right now the aim is to collect data (opt-in!) and use that to improve
>> the project.
>>
>> Questions which still need to be answered are:
>>
>> - Who hosts brag.ceph.com?
>> - Who has access to the data on brag.ceph.com?
>>
>> For now I'd like to get feedback on the idea and the module and see
>> where it can be improved.
>>
>> Feedback, suggestions, flames are welcome!
>>
>> Wido
>>
>> [0]: https://github.com/wido/ceph/tree/mgr-brag
>> [1]: http://flask.pocoo.org/
>>
>> {
>>   "fs": {
>>     "count": 1
>>   },
>>   "description": "My test cluster",
>>   "created": "2018-02-26T10:01:27.790360",
>>   "osd": {
>>     "count": 3,
>>     "require_min_compat_client": "jewel",
>>     "require_osd_release": "luminous"
>>   },
>>   "usage": {
>>     "total_objects": 208,
>>     "total_used_bytes": 3428843520,
>>     "pg_num:": 160,
>>     "total_bytes": 32199671808,
>>     "pools": 6,
>>     "total_avail_bytes": 28770828288
>>   },
>>   "contact": null,
>>   "mon": {
>>     "count": 3,
>>     "features": {
>>       "optional": [],
>>       "persistent": [
>>         "kraken",
>>         "luminous"
>>       ]
>>     }
>>   },
>>   "pools": [
>>     {
>>       "crush_rule": 0,
>>       "min_size": 2,
>>       "pg_num": 8,
>>       "pgp_num": 8,
>>       "type": 1,
>>       "pool": 1,
>>       "size": 3
>>     },
>>     {
>>       "crush_rule": 0,
>>       "min_size": 2,
>>       "pg_num": 8,
>>       "pgp_num": 8,
>>       "type": 1,
>>       "pool": 2,
>>       "size": 3
>>     },
>>     {
>>       "crush_rule": 0,
>>       "min_size": 2,
>>       "pg_num": 8,
>>       "pgp_num": 8,
>>       "type": 1,
>>       "pool": 3,
>>       "size": 3
>>     },
>>     {
>>       "crush_rule": 0,
>>       "min_size": 2,
>>       "pg_num": 8,
>>       "pgp_num": 8,
>>       "type": 1,
>>       "pool": 4,
>>       "size": 3
>>     },
>>     {
>>       "crush_rule": 0,
>>       "min_size": 2,
>>       "pg_num": 64,
>>       "pgp_num": 64,
>>       "type": 1,
>>       "pool": 5,
>>       "size": 3
>>     },
>>     {
>>       "crush_rule": 0,
>>       "min_size": 2,
>>       "pg_num": 64,
>>       "pgp_num": 64,
>>       "type": 1,
>>       "pool": 6,
>>       "size": 3
>>     }
>>   ],
>>   "organization": null,
>>   "public": false,
>>   "fsid": "d40e7248-ef94-438c-a771-f40c34e2e2ba",
>>   "metadata": {
>>     "osd": {
>>       "distro_description": {
>>         "Ubuntu 16.04.3 LTS": 3
>>       },
>>       "rotational": {
>>         "0": 3
>>       },
>>       "kernel_version": {
>>         "4.13.0-36-generic": 3
>>       },
>>       "arch": {
>>         "x86_64": 3
>>       },
>>       "cpu": {
>>         "Intel(R) Core(TM) i7-7600U CPU @ 2.80GHz": 3
>>       },
>>       "osd_objectstore": {
>>         "bluestore": 3
>>       },
>>       "kernel_description": {
>>         "#40~16.04.1-Ubuntu SMP Fri Feb 16 23:25:58 UTC 2018": 3
>>       },
>>       "os": {
>>         "Linux": 3
>>       },
>>       "ceph_version": {
>>         "ceph version 12.2.3 (2dab17a455c09584f2a85e6b10888337d1ec8949)
>> luminous (stable)": 3
>>       },
>>       "distro": {
>>         "ubuntu": 3
>>       }
>>     },
>>     "mon": {
>>       "distro_description": {
>>         "Ubuntu 16.04.3 LTS": 3
>>       },
>>       "kernel_version": {
>>         "4.13.0-36-generic": 3
>>       },
>>       "arch": {
>>         "x86_64": 3
>>       },
>>       "cpu": {
>>         "Intel(R) Core(TM) i7-7600U CPU @ 2.80GHz": 3
>>       },
>>       "kernel_description": {
>>         "#40~16.04.1-Ubuntu SMP Fri Feb 16 23:25:58 UTC 2018": 3
>>       },
>>       "os": {
>>         "Linux": 3
>>       },
>>       "ceph_version": {
>>         "ceph version 12.2.3 (2dab17a455c09584f2a85e6b10888337d1ec8949)
>> luminous (stable)": 3
>>       },
>>       "distro": {
>>         "ubuntu": 3
>>       }
>>     }
>>   }
>> }


