On Sun, Jul 28, 2013 at 11:32 PM, Bryan Whitehead <driver at megahappy.net> wrote:

> Weekend activities kept me away from watching this thread, wanted to
> add in more of my 2 cents... :)
>
> Major releases would be great to happen more often - but keeping
> current releases "more current" is really what I was talking about.
> For example, 3.3.0 was a pretty solid release, but some annoying bugs
> got fixed and 3.3.1 felt reasonably quick to come out. That release,
> though, seemed to be a step back for rdma (forgive me if I am wrong -
> but I think it wasn't even possible to fuse/mount over rdma with 3.3.1
> while 3.3.0 worked). The 3.3.2 release then took a pretty long time to
> come out and fix that regression. I think I also recall seeing a bunch
> of nfs fixes coming and regressing (but since I don't use gluster/nfs
> I don't follow that closely).

Bryan - yes, point well taken. I believe a dedicated release maintainer
role will help in this case. I would like to hear other suggestions or
thoughts on how you/others think this can be implemented.

> What I'd like to see:
> In the -devel mailing list right now I see someone showing that brick
> add / brick replace in 3.4.0 causes a segfault in apps using libgfapi
> (in this case qemu/libvirt) to get at gluster volumes. It looks like
> some patches were provided to fix the issue. Assuming those patches
> work, I think a 3.4.1 release might be worth pushing out. Basic stuff
> like that, on something that a lot of people are going to care about
> (qemu/libvirt integration - or plain libgfapi). So if there were
> scheduled releases every, say, 1-3 months, then I think that might be
> worth doing. Ref:
> http://lists.gnu.org/archive/html/gluster-devel/2013-07/msg00089.html

Right, thanks for highlighting. These fixes will be backported. I have
already submitted the backport of one of them for review at
http://review.gluster.org/5427. The other will be backported once it is
reviewed and accepted in master. (For context, a minimal libgfapi
client sketch is appended at the end of this mail.)

Thanks again!
Avati

> The front page of gluster.org says 3.4.0 has "Virtual Machine Image
> Storage improvements". If, 1-3 months from now, more traction with
> CloudStack/OpenStack or just straight-up libvirtd/qemu with gluster
> gets going, I'd much rather tell someone "make sure to use 3.4.1" than
> "be careful when doing an add-brick - all your VMs will segfault".
>
> On Sun, Jul 28, 2013 at 5:10 PM, Emmanuel Dreyfus <manu at netbsd.org> wrote:
> > Harshavardhana <harsha at harshavardhana.net> wrote:
> >
> >> What is good for GlusterFS as a whole is highly debatable - since
> >> there are no module owners/subsystem maintainers as of yet, at least
> >> on paper.
> >
> > Just my two cents on that: you need to make clear whether a module
> > maintainer is a dictator or a steward for the module: does he have
> > the last word on anything touching his module, or is there some
> > higher authority to settle discussions that do not reach consensus?
> >
> > IMO the first approach creates two problems:
> >
> > - having just one responsible person for a module is a huge bet that
> > this person will have good judgment. It is better to leave a
> > maintainer position open than to assign it to the wrong person.
> >
> > - having many different dictators, each ruling over a module, can
> > create difficult situations when a proposed change impacts many
> > modules.
> >
> > --
> > Emmanuel Dreyfus
> > http://hcpnet.free.fr/pubz
> > manu at netbsd.org
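
The sketch below is purely illustrative of the libgfapi code path
discussed above: "gluster1" and "vmstore" are placeholder server and
volume names, and the program just creates one file on the volume
without going through a FUSE mount. It should build with something
like: gcc hello_gfapi.c -o hello_gfapi $(pkg-config --cflags --libs
glusterfs-api).

/* hello_gfapi.c - minimal libgfapi client sketch (illustrative only).
 * qemu/libvirt reach gluster volumes through this same API, which is
 * the path affected by the add-brick / replace-brick segfault
 * referenced earlier in this thread. */
#include <string.h>
#include <fcntl.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
        glfs_t *fs = glfs_new("vmstore");   /* placeholder volume name */
        if (!fs)
                return 1;

        /* Any server that can hand out the volfile will do. */
        glfs_set_volfile_server(fs, "tcp", "gluster1", 24007);
        glfs_set_logging(fs, "/tmp/gfapi.log", 7);

        if (glfs_init(fs) != 0) {           /* fetch volfile, build graph */
                glfs_fini(fs);
                return 1;
        }

        /* Simple I/O straight against the volume, no FUSE mount involved. */
        glfs_fd_t *fd = glfs_creat(fs, "/hello.txt", O_RDWR, 0644);
        if (fd) {
                const char msg[] = "hello from libgfapi\n";
                glfs_write(fd, msg, strlen(msg), 0);
                glfs_close(fd);
        }

        glfs_fini(fs);
        return 0;
}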