Re: Gluster on ZFS performance concerns


 



Thanks a lot for sharing your results!
I'm looking into a similar test scenario, too.

According to
one should apply this patch.
Did you do that? Or do you not use symlinks on your Gluster mounts?
From your blog posts I understand you didn't; did you experience any issues?
I'd like to use this scenario as a backend for VM images.
The Gluster servers mount the Gluster volumes
and are also the hosts for the VMs, a sort of self-describing/self-providing approach.
I created an individual ZFS filesystem, and along with it a separate Gluster volume,
for each VM. This way I can snapshot the volumes/VMs individually.
I doubt it would make any performance difference
if I put all the images in one volume/ZFS filesystem, would it?
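To make that concrete, the per-VM layout is roughly the following (pool, volume, host and path names here are placeholders, not my real setup):

zfs create tank/vm01                                                # one ZFS filesystem per VM
gluster volume create vm01 replica 2 hostA:/tank/vm01 hostB:/tank/vm01
gluster volume start vm01
mount -t glusterfs localhost:/vm01 /var/lib/libvirt/images/vm01    # the Gluster server hosts the VM itself
zfs snapshot tank/vm01@pre-update                                   # per-VM snapshot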
I ran several VMs in parallel and let them all perform bonnie++ tests.
The whole setup seems quite stable so far,
including live migration of the VMs and rebooting the hosts during the tests.
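For the load, each guest ran something along these lines (size and target directory are just examples, not my exact invocation):

bonnie++ -d /mnt/test -s 8192 -n 128 -u nobody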

TIA
Bernhard



*Ecologic Institute* Bernhard Glomm
IT Administration

Phone: +49 (30) 86880 134
Fax: +49 (30) 86880 100
Skype: bernhard.glomm.ecologic
Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH


On Dec 27, 2013, at 5:52 PM, Tony Maro <tonym@xxxxxxxxxxxxx> wrote:

Off and on for the past 8 months I've been working on setting up a rather sizable Gluster configuration on top of ZFS.  I wanted to report back my experiences.

First, I'm running Ubuntu 12.04 with zfsonlinux.org packages.  Gluster is installed from the semiosis PPA.

My test config is as follows (a rough sketch of the corresponding setup commands is below the list):

Two-brick Gluster volume, mirrored (replica 2)
Each brick has 8 TB of hard drives configured with ZFS using RAID-Z.
Each brick has a 256 GB SSD drive to use as cache for ZFS.
A third server for geo-replication slave also with RAID-Z storage.
Private Gigabit network segment for the gluster servers (and the application servers that talk to them)
Geo-rep will be placed three states away in a colo center, but is currently on the same private segment.
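
Roughly, on each brick that translates to something like this (pool, device, host and volume names are made up, not my actual ones):

zpool create tank raidz sdb sdc sdd sde    # 8 TB of disks in one RAID-Z vdev
zpool add tank cache sdf                   # 256 GB SSD as L2ARC read cache
zfs create tank/gv0                        # dataset used as the brick
gluster volume create gv0 replica 2 brick1:/tank/gv0 brick2:/tank/gv0
gluster volume start gv0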

Data:
About 6 TB of test data; the final deployment will be much larger.  Most files are around 40 MB in size.  Files are stored in directories going four deep, with no more than 255 files in a single directory.

All data is already compressed and encrypted with AES-256 prior to storing on the filesystem, so the compression feature of ZFS isn't useful to me.

The default configs for ZFS caused this to be a miserable failure.  Configuring Gluster geo-replication over SSH was also a difficult task, because the documentation is wrong on many counts when compared to the version in the PPA.
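
For reference, the general shape of the geo-replication commands in this version is roughly the following (host and volume names are placeholders, and the SSH key distribution steps are omitted):

gluster volume geo-replication gv0 ssh://root@geo-slave::gv0-slave start
gluster volume geo-replication gv0 ssh://root@geo-slave::gv0-slave status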

With a default config, directory listings (remember, fewer than 255 files and folders in each directory) would take about 15 seconds to complete over the Gluster share.

By simply tweaking the following ZFS configs:
zfs set atime=off [volname]      # don't update access times on every read
zfs set xattr=sa [volname]       # store xattrs in the dnode; Gluster relies heavily on xattrs
zfs set exec=off [volname]       # files on the bricks never need to be executed directly
zfs set sync=disabled [volname]  # ignore sync requests (fast, but risks data loss on power failure)

I was able to get near instantaneous directory listings and dramatically improved read performance, although I haven't bothered getting hard numbers on that.
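To double-check which settings actually took effect, something like this works (substitute your own dataset name):

zfs get atime,xattr,exec,sync [volname]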

I just wanted to pass this along.  Here are my blog posts about ZFS and Gluster, which pretty much say the same thing I just did but offer more detail about installing ZFS:


-Tony
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users

