Re: Some updates on the eventing framework for Gluster

Hi Samikshan,
As we had discussed in the past, "eventing on the storage cluster, be it Gluster or Ceph, is one of the key features through which management stations can get updates about specific events immediately, rather than waiting for a poll-interval cycle". So we have put forth an overall architecture for an Eventing framework.

We in the USM / SkyRing [1] project are using the SALT event bus. I have attached a slide that gives a summarized view of the Eventing framework in USM. We have already done a decent amount of implementation (from the node-events point of view) to get events from the nodes in a Ceph cluster, and have done a POC for Gluster as well. It would be good to be in sync on the event bus and on how it can be consumed, not only by a management application like USM, but also by other entities in the cluster if required.

-Dusmant

[1] https://github.com/skyrings/skyring

On 12/02/2015 06:08 AM, Samikshan Bairagya wrote:
Hi,

The updates to the eventing framework for Gluster can be divided into the following two parts.

1. Bubbling out notifications through dbus signals from every gluster node.

* The 'glusterfs' module in storaged [1] exports an object on the system bus for every gluster volume. These objects hold the following properties:
- Name
- Id
- Status (0 = Created, 1 = Started, 2 = Stopped)
- Brickcount

* A singleton dbus object corresponding to glusterd is also exported by storaged on the system bus. This object holds properties to track the state of glusterd (LoadState and ActiveState).
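For illustration, here is a minimal sketch of reading these properties over the system bus with dbus-python. Only the bus name 'org.storaged.Storaged' and the standard Properties interface are taken as given; the volume object path and interface name below are my assumptions, not necessarily the names exported by the glusterfs module.

import dbus

bus = dbus.SystemBus()

# Hypothetical object path and interface for volume 'testvol'; the real
# names are defined by the glusterfs module in storaged [1].
VOLUME_PATH = '/org/storaged/Storaged/glusterfs/volume/testvol'
VOLUME_IFACE = 'org.storaged.Storaged.Glusterfs.Volume'

obj = bus.get_object('org.storaged.Storaged', VOLUME_PATH)
props = dbus.Interface(obj, 'org.freedesktop.DBus.Properties')

name = props.Get(VOLUME_IFACE, 'Name')
status = props.Get(VOLUME_IFACE, 'Status')      # 0 = Created, 1 = Started, 2 = Stopped
bricks = props.Get(VOLUME_IFACE, 'Brickcount')
print('volume %s: status=%d, bricks=%d' % (name, status, bricks))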

2. Aggregating all these signals from each node over an entire cluster.

* Using Kafka [2] for messaging over a cluster: Implementing a (dbus signal) listener in Python that converts the dbus signals from these objects into 'keyed messages' in Kafka under a particular 'topic'.

For example, if a volume 'testvol' is started, a message is published under topic 'testvol', with 'status' as the 'key' and the changed status ('1' in this case) as the 'value'.
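To make that concrete, below is a rough sketch of such a listener, assuming dbus-python with the GLib main loop and the kafka-python client. Deriving the topic from the last component of the object path is my guess at the layout, not something the module guarantees.

import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='localhost:9092')

def on_properties_changed(interface, changed, invalidated, path=None):
    if not interface.startswith('org.storaged.Storaged'):
        return  # ignore unrelated PropertiesChanged signals on the bus
    # Assume the last path component is the volume name, e.g.
    # .../glusterfs/volume/testvol -> topic 'testvol' (hypothetical layout).
    topic = path.rsplit('/', 1)[-1]
    for key, value in changed.items():
        # e.g. key 'Status' with value 1 when 'testvol' is started
        producer.send(topic, key=key.lower().encode(), value=str(value).encode())
    producer.flush()

DBusGMainLoop(set_as_default=True)
bus = dbus.SystemBus()
bus.add_signal_receiver(on_properties_changed,
                        signal_name='PropertiesChanged',
                        dbus_interface='org.freedesktop.DBus.Properties',
                        path_keyword='path')
GLib.MainLoop().run()

A management station could then pick these messages up with kafka-python's KafkaConsumer('testvol') and react to status changes without polling.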


*** Near-term plans:
- Export dbus objects corresponding to bricks.
- Figure out how to map a brick directory's path to its backing block device, and consequently to the drive object (sketched after this list). The 'SmartFailing' property from the org.storaged.Storaged.Drive.Ata [3] interface can then be used to track brick failures.
- Make the framework work over a multi-node cluster, possibly with a multi-broker Kafka setup, to identify redundancies as well as to keep information consistent across the cluster.
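On the brick-to-drive mapping, here is a rough sketch for the simple case where a brick sits directly on a mounted block device (stacked setups like LVM would need to walk more objects). It uses storaged's standard ObjectManager plus the Block and Drive.Ata interfaces; the brick path at the end is just an example.

import dbus

def device_for_path(path):
    # Longest mount-point match in /proc/mounts gives the backing device.
    best = ('', None)
    with open('/proc/mounts') as f:
        for line in f:
            dev, mnt = line.split()[:2]
            if path.startswith(mnt) and len(mnt) > len(best[0]):
                best = (mnt, dev)
    return best[1]

def brick_smart_failing(brick_dir):
    bus = dbus.SystemBus()
    manager = dbus.Interface(
        bus.get_object('org.storaged.Storaged', '/org/storaged/Storaged'),
        'org.freedesktop.DBus.ObjectManager')
    device = device_for_path(brick_dir)
    for obj_path, ifaces in manager.GetManagedObjects().items():
        block = ifaces.get('org.storaged.Storaged.Block')
        if block is None:
            continue
        # 'Device' is a NUL-terminated byte array holding the device name.
        if bytearray(block['Device']).rstrip(b'\0').decode() != device:
            continue
        drive = bus.get_object('org.storaged.Storaged', block['Drive'])
        props = dbus.Interface(drive, 'org.freedesktop.DBus.Properties')
        # Raises if the drive has no Ata interface (e.g. virtio disks).
        return bool(props.Get('org.storaged.Storaged.Drive.Ata', 'SmartFailing'))
    return None

print(brick_smart_failing('/export/brick1'))  # example brick path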

Views/feedback/queries are welcome.

[1] https://github.com/samikshan/storaged/tree/glusterfs
[2] http://kafka.apache.org/documentation.html#introduction
[3] http://storaged-project.github.io/doc/latest/gdbus-org.storaged.Storaged.Drive.Ata.html#gdbus-property-org-storaged-Storaged-Drive-Ata.SmartFailing

Thanks and Regards,

Samikshan



Attachment: USMEventingArchitectureV5.odp
Description: application/vnd.oasis.opendocument.presentation

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel
