Gluster.Next Design discussion - Event report

If you missed the design discussions we held over a couple of days last
week, here is the event report.

Day 1, 28th September 2015

*DHT 2*

We started the design discussion with DHT2, where Shyam explained the
motivation behind DHT2 and the scalability pitfalls of the current
design. He touched on core principles and concepts such as no
duplication of directories and centralized, granular layouts. The
on-disk format was also discussed in detail. Questions came in from the
community throughout, and Shyam addressed them periodically. Some
flowcharts were covered to show how FOPs will flow under the DHT2
scheme. A significant amount of time was also spent discussing the
impact on other translators such as Posix, Quota, Changelog, etc. The
DHT2 hangout session recording is available at [1].
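To give a feel for the "no duplication of directories" idea, here is a
minimal, purely illustrative sketch (not the actual DHT2 code; all
names are hypothetical): a directory's metadata lives on exactly one
subvolume chosen by a stable hash, instead of being replicated on every
subvolume.

```python
# Illustrative sketch only, not the DHT2 implementation: with
# centralized layouts, each directory has a single "home" subvolume
# derived from a stable hash, so its entries exist once in the cluster.
import hashlib

SUBVOLUMES = ["subvol-0", "subvol-1", "subvol-2", "subvol-3"]  # hypothetical

def pick_subvolume(name, subvolumes=SUBVOLUMES):
    """Map a name to one subvolume via a stable hash of the name."""
    digest = hashlib.sha1(name.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(subvolumes)
    return subvolumes[index]

# The same name always maps to the same subvolume, so every node can
# locate a directory's single copy without consulting the others.
home = pick_subvolume("/exports/photos")
assert pick_subvolume("/exports/photos") == home
```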

*Heketi*

After lunch we began with a session on Heketi [2], an intelligent,
on-demand, automated GlusterFS volume manager. Luis Pabon explained why
Heketi will play a crucial role in positioning Gluster as a cloud
storage system. The architecture of Heketi [3] was discussed in detail,
and a small Heketi demo was much appreciated. Luis also pointed out
that in the near term resilience has to be handled by the admin; as a
future goal, Heketi needs to ensure its own resiliency. Heketi also
plans to handle events in the future to take care of failures. The
first cut of Heketi was released a few days ago and is available for
use.

*GlusterD 2.0*

After an eventful discussion around Heketi we moved on to GlusterD 2.0,
and KP discussed the motivations behind (re)designing GlusterD. The
existing design doesn't scale well as the number of nodes in the
cluster increases. The amount of configuration data exchanged when a
new node is added to the cluster is quadratic in the number of nodes.
The configuration store is replicated on all the nodes in the cluster
and is not guaranteed to be consistent, and replicating on all nodes
doesn't scale with the node count. GlusterD 2.0 is focused on making
the configuration store resilient to node failures and able to scale as
nodes are added. It will also make the integration of existing and new
feature-specific commands (say, quota-limit-usage) simpler and separate
from the internals of GlusterD. It was decided that the GlusterD team
will send out a proposal for an interface that feature-specific
commands need to implement.
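A simplified back-of-the-envelope model shows why full replication
makes the join cost quadratic. Assuming the store holds one entry per
node and a joining node syncs the full store with every existing peer
(an assumption for illustration, not the exact GlusterD protocol):

```python
# Simplified model of the scaling problem: the store has one entry per
# node, and a joining node exchanges the full store with all existing
# peers, so the data moved grows as n * n.
def entries_exchanged_on_join(n_existing, entries_per_node=1):
    store_size = n_existing * entries_per_node  # full replicated store
    return n_existing * store_size              # one full sync per peer

assert entries_exchanged_on_join(10) == 100
assert entries_exchanged_on_join(100) == 10000  # 10x the nodes, 100x the data
```

This is why GlusterD 2.0 aims for a store that does not require every
node to hold and exchange the entire configuration.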

The hangout recording for Heketi & GlusterD 2.0 is available at [4].


Day 2, 29th September 2015

*NSR*

The NSR discussion kicked off with background on the project and the
use cases behind it before deep diving into the design. Jeff spoke at
length about the basic principles on which NSR is based, and then moved
on to explain its various architectural components. He explained the
journal, terms, and the NSR client before handing over to Avra, who
gave a walkthrough of the NSR server and the journal states. Jeff
resumed with a discussion of reconciliation, we had an open-table
discussion about the in-memory journal view, and the session ended with
how NSR can provide flexible consistency depending on the use case. You
can watch the entire discussion at [5].
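To make the journal/term/reconciliation vocabulary above concrete, here
is a minimal sketch (hypothetical names, not the actual NSR
implementation): writes are journaled under the current leadership
term, and reconciliation replays onto a stale replica the entries it
has not yet seen.

```python
# Illustrative sketch only: a term-tagged journal and a naive
# reconciliation step that catches a stale replica up to the leader.
class Journal:
    def __init__(self):
        self.entries = []            # list of (term, op) tuples

    def append(self, term, op):
        self.entries.append((term, op))

def reconcile(stale, up_to_date):
    """Replay onto `stale` the entries it is missing."""
    for entry in up_to_date.entries[len(stale.entries):]:
        stale.entries.append(entry)

leader, follower = Journal(), Journal()
for term, op in [(1, "write-a"), (1, "write-b"), (2, "write-c")]:
    leader.append(term, op)          # term bumps when leadership changes

reconcile(follower, leader)
assert follower.entries == leader.entries
```

Real reconciliation also has to handle diverged journals, which is
where the journal states discussed in the session come in.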

*Gluster Eventing*

After lunch, the discussion on the eventing framework started with
Samikshan giving an overview of StorageD, DBus, and the list of events
this framework aims to support. He then spoke about the architecture:
StorageD retrieves Gluster state from individual nodes and exposes it
as DBus objects implementing the corresponding interfaces. Hook scripts
would be used to notify StorageD of changes on the Gluster front so
that StorageD can update itself and send out the necessary change
notifications from individual nodes. There were questions about how
these events from individual nodes could be converted into one stream
of events for the entire cluster; Samikshan will be looking at event
buses such as Salt to address this. Watch this discussion offline at
[6].
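The open question above — turning per-node event streams into one
cluster-wide stream — can be sketched as a merge-and-deduplicate step.
Event shapes and names below are hypothetical; the real framework would
sit on DBus rather than plain lists.

```python
# Illustrative sketch: merge time-sorted per-node (timestamp, node,
# event) streams into one ordered cluster stream, dropping duplicate
# reports of the same cluster-wide event from different nodes.
import heapq

def merge_node_streams(streams):
    merged, seen = [], set()
    for ts, node, event in heapq.merge(*streams):
        if event not in seen:        # e.g. "volume-start" reported by many nodes
            seen.add(event)
            merged.append((ts, node, event))
    return merged

node1 = [(1, "n1", "volume-start"), (5, "n1", "brick-down")]
node2 = [(2, "n2", "volume-start"), (6, "n2", "quota-set")]
stream = merge_node_streams([node1, node2])
assert [e for _, _, e in stream] == ["volume-start", "brick-down", "quota-set"]
```

A real design would need a better duplicate key than the event name
(e.g. volume plus event type), which is exactly the kind of detail an
event bus like Salt would help with.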

If you have any questions on these, feel free to reach out to us.

Regards,
Gluster.Next team

[1] https://www.youtube.com/watch?v=HM_0PeG0tFI
[2] https://github.com/heketi/heketi
[3] https://github.com/heketi/heketi/wiki/Architecture
[4] https://www.youtube.com/watch?v=iBFfHv4bne8
[5] https://www.youtube.com/watch?v=oa7468Rfsbw
[6] https://www.youtube.com/watch?v=ToWwfBKxWCQ
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users


