Hi, I have the same problem. I have 2 servers exporting a replicated Gluster
volume to 3 Gluster native clients. I use the Gluster volume as a repository
for qcow2 (KVM) virtual machine images. The virtual machines run on the
Gluster clients with no problems and live migration works great. If one of
the 2 servers goes offline and then comes back, all the virtual machines are
stuck until self-healing completes, because of the high load on the servers
during file reconstruction.

Is there a way to make self-healing less invasive?

Cheers,
Rosario

On 24/03/2011 23:56, R.C. wrote:
> Hi to everyone.
>
> Experimenting with GlusterFS, my first intent is to evaluate the
> possibility of creating an affordable SAN storage for various environments
> (datastores, VM disk images and so on...).
>
> Looking at my test results, my first concern is about system
> performance during the self-heal process.
>
> In a replica 3 volume (or replica 2, for that matter), when a node goes
> offline (wherever the problem lies) and then comes back, the self-heal
> process eats a lot of system resources but, and this is the main
> problem, the volume becomes almost unusable.
> During self-heal, writing data to the cluster (say through Samba)
> drops to speeds on the order of 100 KB/s: quite unacceptable for a SAN
> storage (nor even for a simple NAS, for that matter).
>
> Is there a way to move the self-heal process to the background and keep
> client write (and read) speeds acceptable?
>
> Raf
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
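
For what it's worth, these are the AFR self-heal tuning options I have found
so far. I have not verified that they are all available in the release we are
running, nor that they actually keep the volume responsive during a heal, so
treat this as a sketch rather than a tested fix; the volume name "vmstore" is
just a placeholder:

    # prefer the "diff" algorithm, which rewrites only the changed blocks of a
    # file instead of copying whole qcow2 images from scratch
    gluster volume set vmstore cluster.data-self-heal-algorithm diff

    # limit how many files each client heals in the background at the same time
    gluster volume set vmstore cluster.background-self-heal-count 4

    # heal fewer blocks of a file in parallel (slower heal, lower load)
    gluster volume set vmstore cluster.self-heal-window-size 1

If anyone has measured whether these settings help under a VM-image workload,
I would be glad to hear about it.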