On Wed, Feb 22, 2012 at 12:07 PM, Shain Miley <smiley at npr.org> wrote:
> Are you able to talk about the performance you have been seeing using
> this setup?
>
> I have been testing zfs + gluster using some kvm vm's, and I have found
> it to be stable but kind of slow.
>
> However I believe that the cause of my slowness has more to do with my
> setup, not necessarily anything to do with zfs or gluster (outside of
> the fact they both use fuse and thus introduce some additional
> overhead).

I posted some of my early experience back in October:
http://permalink.gmane.org/gmane.comp.file-systems.gluster.user/7445

Gluster performance is quite reasonable, and given that we use lots of
5400 rpm drives, ZFS performance is really great. If you are able to use
a PCIe SSD as L2ARC you can get a very snappy file system. (I have
appended a few command sketches below the quoted thread.)

I wanted to see whether we could build a supportable production storage
system for under $200/TiB usable, and it appears that this is actually
possible.

Dipe

> Thanks,
>
> Shain
>
> On 02/22/2012 10:46 AM, Dipeit wrote:
>>
>> We have this running with zfsonlinux and GlusterFS 3.2.5, using a 60TB
>> volume across 3 storage servers. In the last 6 months we had one
>> unexplained reboot of one of these servers; cause unknown. Other than
>> that it has been fast and stable.
>>
>> Dipe
>>
>> On Feb 22, 2012, at 5:26 AM, Germain Maurice <gmaurice at linkfluence.net>
>> wrote:
>>
>>> Hi everybody,
>>>
>>> I'm looking for information about using GlusterFS with ZFS. I have
>>> seen reports of a sort of incompatibility between the two
>>> technologies because of unsupported xattr features in ZFS.
>>>
>>> What is the latest news on this?
>>>
>>> Thank you in advance.
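
First, the L2ARC piece. This is a minimal sketch only; the pool name
'tank' and the device path are placeholders for whatever your system
actually has:

    # Attach an SSD to an existing pool as an L2ARC (read cache) device.
    # 'tank' and '/dev/sdx' are placeholders; use your own pool and SSD.
    zpool add tank cache /dev/sdx

    # Confirm the cache device shows up and watch it warm up over time.
    zpool iostat -v tank 5

The L2ARC starts cold after every import, so give it some time under a
real workload before judging the hit rate.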
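
On Germain's xattr question: GlusterFS keeps its metadata in the
trusted.* extended-attribute namespace on each brick, so a quick sanity
check on a ZFS-backed path is to set one and read it back (run as root;
the brick path here is just an example):

    # Create a test file on the ZFS filesystem that will back the brick.
    touch /tank/brick/xattr-test

    # Gluster relies on the trusted.* namespace, so test that one.
    setfattr -n trusted.test -v works /tank/brick/xattr-test
    getfattr -n trusted.test /tank/brick/xattr-test

    # Clean up.
    setfattr -x trusted.test /tank/brick/xattr-test
    rm /tank/brick/xattr-test

Newer zfsonlinux code also has an 'xattr=sa' property (zfs set xattr=sa
tank/brick) that stores xattrs with the file's dnode rather than in
hidden directories, which should cut the xattr I/O considerably; check
whether your build actually has it before relying on it.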
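
And for completeness, roughly how ZFS bricks get wired into a volume.
The hostnames, pool, and volume names below are invented, and I am
showing a replicated layout as one option; a plain distributed volume
across 3 servers is the same commands minus the 'replica 3':

    # On each server: carve out a ZFS filesystem to serve as the brick.
    zfs create tank/brick

    # On one server: join the peers, then create and start the volume
    # (GlusterFS 3.2 syntax).
    gluster peer probe server2
    gluster peer probe server3
    gluster volume create myvol replica 3 \
        server1:/tank/brick server2:/tank/brick server3:/tank/brick
    gluster volume start myvol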