Re: Ceph 'brag' Manager Module

On 2018-03-26T08:39:52, Wido den Hollander <wido@xxxxxxxx> wrote:

Hi all,

This is all pretty awesome, and I'm excited to understand our use cases
better.

Though I'd also publicly recommend naming this "telemetry" instead of
"brag". ;-)

There's some very interesting prior work to reference for what kind of
insights we could glean from this - compare https://telemetry.mozilla.org/

I'm currently looking into the policies used to govern access to such
data. https://wiki.mozilla.org/Firefox/Data_Collection has some
insights, but it merely documents the kind of data stored, not how
access privileges are handled.

(And I'm concerned that anything that implies feedback on user actions
might fall under the upcoming EU GDPR; e.g., Microsoft's telemetry was
recently determined to do so and needs adjustments in the April 2018
update cycle.)

I've reached out to Mozilla to understand how they, as another OSS
project, handle this.

Also, an open JSON endpoint on the Internet - what could possibly go
wrong? How do we ensure the quality and authenticity of incoming data?

Both in terms of protecting against malicious overload, and in terms of
someone maliciously injecting bad data for existing clusters.

If updates were signed with a recurring key, for example, the server
could automatically determine that it is indeed always the same cluster
submitting, and the key could also serve to control access to that data
(such as the right to delete it).
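
To illustrate, here's a rough Python sketch of what signed submissions
could look like, assuming an Ed25519 key pair generated once per cluster
via the "cryptography" package (all the names here are made up, not an
existing interface):

    # Sketch: sign telemetry submissions with a persistent per-cluster key.
    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    # Generated once when the module is first enabled, then persisted
    # so every subsequent report is signed with the same key.
    cluster_key = Ed25519PrivateKey.generate()

    def sign_report(report: dict) -> dict:
        """Wrap a report with a signature over its canonical JSON."""
        payload = json.dumps(report, sort_keys=True).encode("utf-8")
        return {"report": report,
                "signature": cluster_key.sign(payload).hex()}

    def verify_report(submission: dict,
                      public_key: Ed25519PublicKey) -> bool:
        """Server side: accept only if the signature matches the public
        key previously registered for this cluster."""
        payload = json.dumps(submission["report"],
                             sort_keys=True).encode("utf-8")
        try:
            public_key.verify(bytes.fromhex(submission["signature"]),
                              payload)
            return True
        except InvalidSignature:
            return False

The endpoint could then treat the public key from a cluster's first
submission as its registration, and require the same key for later
updates or deletions.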

> > Very cool!  I've left some comments on the commit.
> > 
> > I anticipate wanting various per-subsystem additions to this over time
> > for things like flagging which cephfs features are enabled.  We could
> > also use the pool tags to report whether a system is using RGW/RBD.

This should be possible if we transfer the pool application type.
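
Roughly, something like this inside the module - the
"application_metadata" field matches what "ceph osd dump --format json"
reports, but the surrounding plumbing is assumed:

    # Sketch: derive which applications (rbd, rgw, cephfs, ...) a cluster
    # uses from the pool application metadata in the OSD map.
    def applications_in_use(osd_map: dict) -> dict:
        """Count pools per enabled application, e.g. {'rbd': 5, 'rgw': 4}."""
        counts = {}
        for pool in osd_map.get("pools", []):
            for app in pool.get("application_metadata", {}):
                counts[app] = counts.get(app, 0) + 1
        return counts

    # Inside the module, something like:
    #     report["pool_applications"] = applications_in_use(self.get("osd_map"))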

> >>       "rotational": {
> >>         "0": 3
> >>       },

I'd rather get more explicit dumps of the actual OSD tree & OSD metadata
here than data that has already been aggregated. We may also want
details on OSD variance and weights.

On the other hand, the "crush_rule" one is not so useful without knowing
what the rules are, I feel.
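
To sketch what I mean, the module could ship the raw structures plus the
rule definitions. I'm assuming the "osd_map_tree", "osd_metadata", and
"osd_map_crush" keys of MgrModule.get() here; treat the exact key names
as assumptions:

    # Sketch: include the raw OSD tree, per-OSD metadata, and the crush
    # rules themselves, so that "crush_rule" references can be resolved.
    def collect_topology(module) -> dict:
        return {
            # Full tree instead of pre-aggregated counts:
            "osd_tree": module.get("osd_map_tree"),
            # Per-OSD facts (rotational, backend, sizes, weights, ...):
            "osd_metadata": module.get("osd_metadata"),
            # Rule bodies, so a pool's crush_rule id means something:
            "crush_rules": module.get("osd_map_crush").get("rules", []),
        }

With the rule bodies included, the server side can still aggregate
(rotational counts, weight variance), but from data whose meaning is
unambiguous.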


Regards,
    Lars

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde
