No, I saw a patch to make it behave like this, but I can't find it right now.

On 6/28/12 6:54 AM, Tim Bell wrote:
> Assuming that we use a 3 copy approach across the hypervisors, does Gluster
> favour the local copy on the hypervisor if the data is on
> distributed/replicated?
>
> It would be good to avoid the network hop when the data is on the local
> disk.
>
> Tim
>
>> -----Original Message-----
>> From: gluster-users-bounces at gluster.org [mailto:gluster-users-
>> bounces at gluster.org] On Behalf Of Fernando Frediani (Qube)
>> Sent: 28 June 2012 11:43
>> To: 'Nicolas Sebrecht'; 'Thomas Jackson'
>> Cc: 'gluster-users'
>> Subject: Re: about HA infrastructure for hypervisors
>>
>> You should indeed use the same server that runs as a storage brick as a
>> KVM host, to maximize hardware and power usage. The only thing I am not
>> sure about is whether you can limit the amount of host memory Gluster can
>> use, so that most of it stays reserved for the virtual machines.
>>
>> Fernando
>>
>> -----Original Message-----
>> From: gluster-users-bounces at gluster.org [mailto:gluster-users-
>> bounces at gluster.org] On Behalf Of Nicolas Sebrecht
>> Sent: 28 June 2012 10:31
>> To: Thomas Jackson
>> Cc: 'gluster-users'
>> Subject: Re: about HA infrastructure for hypervisors
>>
>> The 28/06/12, Thomas Jackson wrote:
>>
>>> Why don't you have KVM running on the Gluster bricks as well?
>>
>> Good point. While sketching out the design we decided to separate KVM
>> and Gluster, but I can't remember why. We'll think about that again.
>>
>>> We have a 4 node cluster (each with 4x 300GB 15k SAS drives in RAID10)
>>> and 10 gigabit SFP+ Ethernet (with redundant switching). Each node
>>> participates in a distribute+replicate Gluster namespace and runs KVM.
>>> We found this to be the most efficient (and fastest) way to run the
>>> cluster.
>>>
>>> This works well for us, although (due to Gluster using FUSE) it isn't
>>> as fast as we would like. We are currently waiting for the KVM driver
>>> that has been discussed a few times recently, which should make a huge
>>> difference to performance for us.
>>
>> OK! Thanks.
>>
>> --
>> Nicolas Sebrecht
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
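
For what it's worth, on Tim's question about favouring the local replica:
some Gluster releases expose a volume option for exactly this. A minimal
sketch, assuming a hypothetical volume named "vmstore" and that your
release accepts the option (check the output of "gluster volume set help"):

    # Prefer the brick on the local node for reads when it holds a replica.
    # "vmstore" is a placeholder volume name; cluster.choose-local may not
    # exist on older releases, where cluster.read-subvolume is the nearest
    # equivalent (it pins reads to one named subvolume rather than the
    # local one).
    gluster volume set vmstore cluster.choose-local on

Whether older releases already prefer the local brick by default is
exactly the behaviour the patch mentioned above was meant to address.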
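
On Fernando's point about keeping Gluster's memory footprint small so the
RAM stays available for the virtual machines: the client-side caches are
the main per-volume knobs you can bound. A rough sketch, again with the
hypothetical "vmstore" volume; the values are illustrative, not
recommendations, and option names should be checked against your release:

    # Cap the io-cache read cache and the write-behind buffer per file.
    gluster volume set vmstore performance.cache-size 256MB
    gluster volume set vmstore performance.write-behind-window-size 4MB

For a hard ceiling on everything the Gluster processes can allocate, a
cgroup memory limit on the glusterfs/glusterfsd processes is the blunter
instrument.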
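
And for anyone reproducing Thomas's layout: a 4 node distribute+replicate
volume is a single "replica 2" create across the four bricks. The replica
count, hostnames, brick paths and mount point below are placeholders, not
what Thomas actually runs:

    # Two replica pairs, distributed: (node1,node2) and (node3,node4).
    # Bricks are paired in the order they are listed.
    gluster volume create vmstore replica 2 \
        node1:/export/brick1 node2:/export/brick1 \
        node3:/export/brick1 node4:/export/brick1
    gluster volume start vmstore

    # Mount on each hypervisor with the FUSE client, e.g.:
    mount -t glusterfs localhost:/vmstore /var/lib/libvirt/images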