Hi Pranith,

Sure, I can test it, but not on physical nodes; are tests on VMs OK for
you? At the end I can also upgrade to 3.7.13 and run tests on physical
nodes.

On Thu, Apr 14, 2016 at 2:04 PM, Pranith Kumar Karampuri
<pkarampu@xxxxxxxxxx> wrote:
>
>
> On 04/14/2016 04:15 PM, Serkan Çoban wrote:
>>>>>
>>>>> We are almost done backporting multi-threaded self-heal. I am in the
>>>>> process of writing a blog post to give a full idea of this one.
>>
>> Will multi-threaded self-heal work for disperse volumes? What is the
>> blog address?
>
>
> Hi Serkan,
>       At the moment we are enabling multi-threaded self-heal only for
> replicate volumes, mainly because we didn't get enough time to test the
> changes with disperse volumes. Do you want to help out with testing
> multi-threaded heal in disperse volumes? I can provide a patch which
> does this. We can maybe target this for 3.7.13 based on your inputs.
>
> Pranith
>
>>
>> On Thu, Apr 14, 2016 at 12:44 PM, Pranith Kumar Karampuri
>> <pkarampu@xxxxxxxxxx> wrote:
>>>
>>>
>>> On 04/14/2016 09:46 AM, Lindsay Mathieson wrote:
>>>>
>>>> Sorry to bring this up again, but I never did figure out the right
>>>> settings for this.
>>>>
>>>> If I reboot a gluster node for a rep 3 volume, what are the settings
>>>> for maximising heal speed, assuming one is not worried about I/O or
>>>> CPU?
>>>>
>>>> Gluster 3.7.9
>>>> Sharded volume (4MB)
>>>
>>> The only benefit you get over replicate volumes without sharding is
>>> that only the shards which changed while the brick was down are
>>> healed, instead of the whole VM image. Apart from that I don't see
>>> any settings that will improve heal speed. Wait for 3.7.12. We are
>>> almost done backporting multi-threaded self-heal. I am in the process
>>> of writing a blog post to give a full idea of this one.
>>>
>>> Pranith
>>>>
>>>>
>>>> thanks,
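
For anyone planning to test the backport: once multi-threaded self-heal
lands, the expectation is that it becomes tunable per volume. A minimal
sketch, assuming a hypothetical volume named "testvol" and that the
option names are the ones the feature later shipped with
(cluster.shd-max-threads and cluster.shd-wait-qlength):

    # Option names assume the backported feature; "testvol" is a
    # placeholder volume name.
    gluster volume set testvol cluster.shd-max-threads 4      # parallel heal jobs per self-heal daemon
    gluster volume set testvol cluster.shd-wait-qlength 1024  # queue depth for entries awaiting heal

    # Monitor heal progress while testing:
    gluster volume heal testvol info
    gluster volume heal testvol statistics heal-count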
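
Pranith's point about sharding can also be checked directly: with
sharding enabled, heal info after a brick comes back should list only
the individual shard files that changed while the brick was down, not
the whole VM image. A sketch with the same placeholder volume name:

    # Sharding as discussed in the thread (4MB is the 3.7 default shard
    # size); enable only on a volume created for this test.
    gluster volume set testvol features.shard on
    gluster volume set testvol features.shard-block-size 4MB

    # After rebooting one node and writing to the volume, only the
    # changed shards should appear as entries needing heal:
    gluster volume heal testvol info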