----- Original Message -----
> From: "Lindsay Mathieson" <lindsay.mathieson@xxxxxxxxx>
> To: "Krutika Dhananjay" <kdhananj@xxxxxxxxxx>
> Cc: "gluster-users" <gluster-users@xxxxxxxxxxx>
> Sent: Wednesday, October 28, 2015 1:08:33 PM
> Subject: Re: Shard Volume testing (3.7.5)
>
> On 28 October 2015 at 17:03, Krutika Dhananjay <kdhananj@xxxxxxxxxx> wrote:
>
> So sharding also helps with better disk utilization in
> distributed-replicated volumes for large files (like VM images).
> ..
> There are other long-term benefits one could reap from using sharding: for
> instance, for someone who might want to use tiering in the VM store use
> case, having sharding will be beneficial in terms of only migrating the
> shards between hot and cold tiers, as opposed to moving large files in
> full, even if only a small portion of the file is changed/accessed. :)
>
> Interesting points, thanks.
>
> Yes. So Paul Cuzner and Satheesaran, who have been testing sharding here,
> have reported better write performance with 512M shards. I'd be interested
> to know what you feel about performance with relatively larger shards
> (think 512M).
>
> Seq read speeds basically tripled, and seq writes improved to the limit of
> the network connection.
>
> OK. And what about the data heal performance with 512M shards?
> Satisfactory?
>
> Easily satisfactory, a bit slower than the 4MB shard but still way faster
> than a full multi-GB file heal :)
>
> Something I have noticed is that the heal info (gluster volume heal
> <datastore> info) can be very slow to return, as in many tens of seconds;
> is there a way to speed that up?

Yes, there is a way to speed it up. Basically, the process of finding out
whether a file needs heal or not takes some time, leading to slow heal info.
This decision making can be done in a faster way. I'm working on the
approach and will send a patch in the coming days.

> It would be very useful if there was a command that quickly gave a
> summary/progress status, e.g. "There are <X> shards to be healed".
>
> --
> Lindsay

--
Thanks,
Anuradha.

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
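
[Editor's note] For anyone following along who wants to try the setup discussed above, a minimal sketch (not part of the thread itself): sharding in 3.7 is enabled per volume and the shard size set via volume options. The volume name "datastore" and the 512MB size are taken from the messages above; adjust for your own volume, and note the block size applies to files created after it is set.

    # enable sharding and set the shard size discussed in the thread
    gluster volume set datastore features.shard on
    gluster volume set datastore features.shard-block-size 512MB

Until a proper summary/progress command exists, one rough workaround for the "how many shards are left to heal" question is to sum the per-brick "Number of entries" counts that heal info already prints:

    # condense heal info output into a single pending-entry count
    gluster volume heal datastore info | \
        awk '/Number of entries:/ {n += $NF} END {print n, "entries pending heal"}'

This is still subject to the slowness mentioned above, since it runs the full heal info scan; it only condenses the output into one number.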