Re: Gluster Community Newsletter, February 2016

On Tue, Feb 23, 2016 at 1:12 AM, Amye Scavarda <amye@xxxxxxxxxx> wrote:
> What a busy month this past month has been for Gluster!
> We’ve got updates from SCaLE, FOSDEM, our Developer Gatherings in
> Brno, DevConf, noteworthy threads from the mailing lists, and upcoming
> events.
> This post is also available on the Gluster blog:
> http://blog.gluster.org/2016/02/gluster-community-newsletter-february-2015/
>
> From SCaLE:
> - Richard Wareing gave a talk at the Southern California Linux Expo
> about Scaling Gluster at Facebook. More at
> https://blog.gluster.org/2016/01/scaling-glusterfs-facebook/
>
> From FOSDEM:
> Humble and Kaushal have posted thoughts on FOSDEM
> http://website-humblec.rhcloud.com/me-fosdem-2016/
> https://kshlm.in/fosdem16/
>
> From Developer Gatherings:
> We had a group of developers gather in Brno ahead of DevConf to
> discuss a number of different Gluster related things.
> Highlights were:
> GD2 with Kaushal - https://public.pad.fsfe.org/p/gluster-gd2-kaushal

We discussed volgen for GD2 in this meeting. I've put up a summary of
the discussion and its outcomes in the etherpad. I'll be putting the
same up as a design spec in glusterfs-spec soon and will keep it
updated as we progress.

> Heketi & Eventing with Luis - https://public.pad.fsfe.org/p/gluster-heketi
> DHT2 with Venky - https://public.pad.fsfe.org/p/gluster-4.0-dht2
>
> From DevConf
> Ceph vs Gluster vs Swift: Similarities and Differences
> https://devconfcz2016.sched.org/event/5lze/ceph-vs-gluster-vs-swift-similarities-and-differences
> Prashanth Pai, Thiago da Silva
>
> Automated GlusterFS Volume Management with Heketi
> https://devconfcz2016.sched.org/event/5m0P/automated-glusterfs-volume-management-with-heketi
> Luis Pabon
> NFS-Ganesha and Distributed Storage Systems
> https://devconfcz2016.sched.org/event/5m15/nfs-ganesha-and-distributed-storage-systems
> Kaleb S. Keithley
>
> Build your own Scale-Out Storage with Gluster
> https://devconfcz2016.sched.org/event/5m1X/build-your-own-scale-out-storage-with-gluster
> Niels de Vos
>
> Freak show (#2): CTDB -- Scaling The Aliens Back To Outer Space
> https://devconfcz2016.sched.org/event/5m1l/freak-show-2-ctdb-scaling-the-aliens-back-to-outer-space
> Günther Deschner, Michael Adam
>
> oVirt and Gluster Hyperconvergence
> https://devconfcz2016.sched.org/event/5m20/ovirt-and-gluster-hyperconvergence
> Ramesh Nachimuthu
>
> Improvements in gluster for virtualization usecase
> https://devconfcz2016.sched.org/event/5m1p/improvements-in-gluster-for-virtualization-usecase
> Prasanna Kumar Kalever
>
> Test Automation and CI using DiSTAF
> https://devconfcz2016.sched.org/event/5m1U/test-automation-and-ci-using-distaf
> Vishwanath Bhat
> Gluster Developer Gatherings at Brno before DevConf
>
>
> Noteworthy threads:
> Soumya Koduri investigates the issue of memory leaks in the GlusterFS
> FUSE client and suggests a re-run after applying a few specific
> patches. More at
> <https://www.gluster.org/pipermail/gluster-users/2016-January/024775.html>.
> Oleksandr reported that it did not make an impact; Xavier confirmed a
> similar issue with the 3.7.6 release
> (<https://www.gluster.org/pipermail/gluster-users/2016-January/024932.html>).
> The thread (at <https://www.gluster.org/pipermail/gluster-users/2016-January/thread.html#24775>)
> is a good read on how to work through diagnosing and fixing memory
> leaks.
>
> Sachidananda provided an update
> (<https://www.gluster.org/pipermail/gluster-users/2016-January/024790.html>)
> about gdeploy v2.0, including design changes
> (<https://github.com/gluster/gdeploy/blob/master/doc/gdeploy-2>) to
> enable modularity and the separation of core functionality into
> self-contained units.
>
> Kyle Harris reported an issue around high I/O and processor
> utilization (<https://www.gluster.org/pipermail/gluster-users/2016-January/024811.html>).
> Ravishankar, Krutika and Pranith worked with the reporter to identify
> specific ways to address the issue. Pranith indicated that a 3.7.7
> release is coming up soon
> (<https://www.gluster.org/pipermail/gluster-users/2016-January/024836.html>).
>
> A query was raised about the 3.6.8 release notes
> (<https://www.gluster.org/pipermail/gluster-users/2016-January/024820.html>),
> along with a suggestion to include them at
> <http://download.gluster.org/pub/gluster/glusterfs/>. Niels responded
> stating that the notes should be part of the repository at
> <https://github.com/gluster/glusterfs/tree/release-3.6/doc/release-notes>
> and added the release manager to the thread to provide additional detail.
>
> Vijay provided an update around the changes being discussed for 3.8
> (<https://www.gluster.org/pipermail/gluster-users/2016-January/024828.html>).
> The maintainers feel it is worthwhile to include some of the key
> features for 4.0, e.g. NSR, dht2 and glusterd2.0, as experimental in
> the release; ensure better component coverage for tests in distaf; and
> add a forward compatibility section to all the feature pages proposed
> for 3.8 in order to facilitate review of the Gluster.next features. In
> the same mail Vijay proposed that Niels de Vos be the maintainer for
> the 3.8 release. Lastly, the projected GA date for 3.8 is now set to
> end of May or early June 2016.
>
> Ramesh Nachimuthu linked to a blog post about designing hyperconverged
> infrastructure using oVirt 3.6 and Gluster 3.7.6
> (<https://www.gluster.org/pipermail/gluster-users/2016-January/024849.html>).
> More at <http://blogs-ramesh.blogspot.in/2016/01/ovirt-and-gluster-hyperconvergence.html>.
>
> Lindsay Mathieson brought up the topic of 'File Corruption when adding
> bricks to live replica volumes'
> (<https://www.gluster.org/pipermail/gluster-users/2016-January/024920.html>).
> Further along in the discussion
> (<https://www.gluster.org/pipermail/gluster-users/2016-January/025010.html>),
> Krutika volunteered to write in detail about the specifics of the heal
> activity and the client- and server-side heal design.
>
> A discussion around heal processes
> (<https://www.gluster.org/pipermail/gluster-users/2016-January/025042.html>)
> led to a description of a large deployment consisting of over 1000
> clients. Again, this is a thread where the debugging and diagnosis of
> GlusterFS issues in large deployments are highlighted, along with the
> typical workload.
>
> DHT2:
> Shyam shared a new presentation on developments in DHT2
> (<http://www.gluster.org/pipermail/gluster-devel/2016-February/048371.html>).
>
> ++ gluster-devel ++
>
> Pranith and Joseph discuss
> (<https://www.gluster.org/pipermail/gluster-devel/2016-January/048006.html>)
> the issue where ctime/mtime values cause confusion for application
> software, especially backup tools that identify changes by comparing
> these values, and attempt to suggest remedies.
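>
> To make the backup scenario concrete, the following is a minimal,
> hypothetical Python sketch (not taken from the thread or from any
> particular backup tool) of the common pattern: record st_mtime at the
> previous run and re-copy only files whose value has changed, which is
> exactly what breaks when the reported ctime/mtime values are not
> stable.
>
>     import os
>
>     def files_to_back_up(paths, last_seen_mtime):
>         """Return the subset of 'paths' whose modification time differs
>         from the value recorded at the previous backup run.  If the
>         filesystem reports unstable mtime values, unchanged files get
>         re-copied (or real changes get missed) on every run."""
>         changed = []
>         for path in paths:
>             mtime = os.stat(path).st_mtime
>             if last_seen_mtime.get(path) != mtime:
>                 changed.append(path)
>                 last_seen_mtime[path] = mtime
>         return changed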
>
> Ravishankar introduced a proposal
> (<https://www.gluster.org/pipermail/gluster-devel/2016-January/047975.html>)
> for a server-side throttling translator to regulate FOPs. He believes
> that it will address frequently reported issues of AFR self-heal
> consuming high CPU and causing resource starvation. The proposal plans
> to use the Token Bucket Filter algorithm (also used by bitrot) to
> regulate the checksum calculation.
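>
> For readers unfamiliar with the algorithm, the following is a minimal
> Python sketch of a token bucket rate limiter; it is purely
> illustrative (the proposed translator is implemented in C inside
> GlusterFS, and the class and parameter names here are made up).
>
>     import time
>
>     class TokenBucket:
>         """Minimal token bucket: 'rate' tokens are added per second, up
>         to 'capacity'; an operation runs only when it can pay its cost."""
>
>         def __init__(self, rate, capacity):
>             self.rate = rate              # tokens added per second
>             self.capacity = capacity      # maximum burst size
>             self.tokens = capacity        # start with a full bucket
>             self.last = time.monotonic()
>
>         def _refill(self):
>             now = time.monotonic()
>             self.tokens = min(self.capacity,
>                               self.tokens + (now - self.last) * self.rate)
>             self.last = now
>
>         def allow(self, cost=1):
>             """Consume 'cost' tokens and return True if the operation may
>             run now; return False if the caller should defer it."""
>             self._refill()
>             if self.tokens >= cost:
>                 self.tokens -= cost
>                 return True
>             return False
>
>     # Allow a burst of up to 200 checksum-sized units, then ~100/second.
>     bucket = TokenBucket(rate=100, capacity=200)
>     if not bucket.allow(cost=1):
>         pass  # defer the (hypothetical) checksum calculation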
>
> Richard Wareing pointed
> (<https://www.gluster.org/pipermail/gluster-devel/2016-January/047964.html>)
> to a bug report <https://bugzilla.redhat.com/show_bug.cgi?id=1301401>
> "Mis-behaving brick clients (gNFSd, FUSE, gfAPI) can cause cluster
> instability and eventual complete unavailability due to failures in
> releasing entry/inode locks in a timely manner".  The locks revocation
> feature was part of his talk at SCaLE14x.
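>
> In rough terms, the idea behind revoking stale locks can be pictured
> with the hypothetical Python sketch below (the real feature lives in
> the GlusterFS locks translator, written in C; the names and the age
> threshold here are invented): a lock held past a configured time-out
> is forcibly released so a single misbehaving client cannot stall the
> rest of the cluster.
>
>     import time
>
>     class LockTable:
>         """Toy registry of held locks, keyed by inode, that revokes locks
>         whose holders have kept them longer than max_age seconds."""
>
>         def __init__(self, max_age=60.0):
>             self.max_age = max_age
>             self.held = {}  # inode -> (client_id, acquired_at)
>
>         def acquire(self, inode, client_id):
>             self._revoke_if_stale(inode)
>             if inode in self.held:
>                 return False  # still held by a healthy client
>             self.held[inode] = (client_id, time.monotonic())
>             return True
>
>         def release(self, inode, client_id):
>             holder = self.held.get(inode)
>             if holder and holder[0] == client_id:
>                 del self.held[inode]
>
>         def _revoke_if_stale(self, inode):
>             holder = self.held.get(inode)
>             if holder and time.monotonic() - holder[1] > self.max_age:
>                 # Holder failed to release in time: revoke so others proceed.
>                 del self.held[inode]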
>
> Raghavendra Talur has initiated a thread
> (<https://www.gluster.org/pipermail/gluster-devel/2016-January/thread.html#47941>)
> on tips and tricks which can be collated into a useful resource for
> new developers contributing to GlusterFS. Vijay added specifics from
> his workflow
> (<https://www.gluster.org/pipermail/gluster-devel/2016-January/048000.html>)
> and others provided insights.
>
> Niels de Vos picks up the conversation
> (<https://www.gluster.org/pipermail/gluster-devel/2016-January/047903.html>)
> around closing bug reports filed against a mainline version. He
> proposed that the Bug Report Life Cycle policy could be defined and
> updated. In addition to that, the script/bugzilla query used to
> retrieve the bugs could be stored in the release-tools repository, to
> be run after each release.
>
> As a lead-up to his talk at FOSDEM on "Gluster Roadmap, Recent
> Improvements and Upcoming Features", Niels de Vos sought
> (<https://www.gluster.org/pipermail/gluster-devel/2016-January/047862.html>)
> short descriptions from the feature owners/developers to include in
> his slide deck.
>
> Avra asked for inputs
> (<https://www.gluster.org/pipermail/gluster-devel/2016-January/047841.html>)
> on a possible name for 'New Style Replication' (or, NSR) to make it
> less generic and more representative of the fundamental design idea.
>
> Kaleb provided an update
> (<https://www.gluster.org/pipermail/gluster-devel/2016-January/047808.html>)
> on Python 3.5 being approved for Fedora 24 and suggested that the
> requirement for Python 2 be looked into and discussed. Prashant
> suggested usage
> (<https://www.gluster.org/pipermail/gluster-devel/2016-January/047823.html>)
> of the "six" module to help maintain code that will run on both
> Python 2 and Python 3.
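>
> As a small illustration of that approach (not taken from the Gluster
> codebase; the function and data below are made up), the six module
> lets one piece of code run unmodified on Python 2 and Python 3:
>
>     import six
>
>     def describe(value):
>         """Describe 'value' the same way on Python 2 and Python 3."""
>         if isinstance(value, six.string_types):   # str, or unicode on Py2
>             return "string: " + value
>         if isinstance(value, six.integer_types):  # int, or long on Py2
>             return "integer: %d" % value
>         return "other: %r" % (value,)
>
>     def dump(options):
>         """Iterate a dict whether items() is a list (Py2) or a view (Py3)."""
>         for key, val in six.iteritems(options):
>             six.print_(key, "=", describe(val))
>
>     dump({"volume": "gv0", "replica": 3})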
>
> Humble responded
> (<https://www.gluster.org/pipermail/gluster-devel/2016-January/047956.html>)
> that the search feature of the documentation on readthedocs.org was
> known to be broken due to a rearrangement of the 'features' and
> 'feature planning' directory structures and was expected to be
> addressed soon.
>
> ++ gluster-infra ++
>
> Raghavendra Talur informed the list
> (<https://www.gluster.org/pipermail/gluster-infra/2016-January/001790.html>)
> about enabling comment-based triggers on the Jenkins instance.
>
> Michael Scherer announced a Gerrit outage
> (<https://www.gluster.org/pipermail/gluster-infra/2016-January/001785.html>)
> intended to fix issues with the index of the Lucene datastore used by
> Gerrit. Later he was happy to note that it took less time than
> estimated and everything worked according to plan.
>
> In a thread around the possibility of enabling individual accounts (on
> Jenkins) for maintainers, Michael brought up a topic of governance
> (<https://www.gluster.org/pipermail/gluster-infra/2016-January/001766.html>):
> who can be considered a 'contributor' and is thus entitled to such
> privileges. He highlights the need for clear qualifying criteria,
> which the available FreeIPA infrastructure can then use for access and
> privilege controls.
>
>
> Upcoming events
> FAST - Feb 22-25 <https://www.usenix.org/conference/fast16>
> Vault - April 20-21 <http://events.linuxfoundation.org/events/vault>
>
> --
> Hey, you made it this far! Want to add things to next month's
> newsletter? Let me know!
> --
> Amye Scavarda | amye@xxxxxxxxxx | Gluster Community Lead
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://www.gluster.org/mailman/listinfo/gluster-users