ZFS + GlusterFS raid5 low read performance

Hi everyone,

I am currently working on a project for which I am using:
  • 3 storage nodes connected with Omni-Path
  • 6 SATA 750 GB HDDs per node (18 disks in total)

I created a ZFS raidz1 pool on each node (5 data disks + 1 parity) and a GlusterFS dispersed volume in a raid5-like layout (disperse 3, redundancy 1) across the 3 nodes.

Unfortunately, with (very) big files I see quite low read performance compared to write performance: writes reach 700 MB/s while reads only reach 320 MB/s.

Do you know of any tuning/optimization parameters that could help me get better read performance?
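
For context, here are the kinds of knobs I have been looking at so far. The option names come from "gluster volume set help" and the zfs manpages; the values are guesses on my part, not tested results (the raid55 volume is described further below):

# gluster volume set raid55 performance.read-ahead on
# gluster volume set raid55 performance.read-ahead-page-count 16
# gluster volume set raid55 performance.client-io-threads on
# gluster volume set raid55 performance.io-thread-count 32

And on the ZFS side (recordsize=1M needs the large_blocks pool feature):

# zfs set recordsize=1M tank
# zfs set atime=off tank
# zfs set xattr=sa tank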


Here's more information on the configuration:

ZFS raidz1 on each node:

# zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank  4,06T  10,5M  4,06T         -     0%     0%  1.00x  ONLINE  -
# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:
    NAME        STATE     READ WRITE CKSUM
    tank        ONLINE       0     0     0
      raidz1-0  ONLINE       0     0     0
        sda     ONLINE       0     0     0
        sdb     ONLINE       0     0     0
        sdc     ONLINE       0     0     0
        sdd     ONLINE       0     0     0
        sde     ONLINE       0     0     0
        sdf     ONLINE       0     0     0
errors: No known data errors

The command used to create the pool:

# zpool create -f tank raidz sda sdb sdc sdd sde sdf
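
One thing I am not sure about is sector alignment: if these are 4K-sector drives, my understanding is that the pool should be created with an explicit ashift. This is only a guess, not what I actually did above:

# zpool create -f -o ashift=12 tank raidz sda sdb sdc sdd sde sdf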

When running IOR on each node, I get about 460 MB/s write and 430 MB/s read (writing 1024 GiB with a transfer size of 16 MiB).
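
The run was along these lines (the mpirun/task layout here is illustrative, not my exact command line; -t is the IOR transfer size and -b the block size per task, so 4 tasks x 256 GiB = 1024 GiB total, and -e forces an fsync so the write number is not just page cache):

# mpirun -np 4 ior -w -r -e -t 16m -b 256g -o /tank/ior_testfile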


GlusterFS dispersed (raid5-like) volume over TCP (IP over Omni-Path) between the 3 nodes:

# gluster volume create raid55 disperse 3 redundancy 1 sm01.opa:/tank/ec_point sm02.opa:/tank/ec_point sm03.opa:/tank/ec_point
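
The volume is then started and mounted on the clients through the FUSE client (the mount point below is just an example):

# gluster volume start raid55
# mount -t glusterfs sm01.opa:/raid55 /mnt/raid55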

There is a big gap between the read performance measured locally on each ZFS pool (~430 MB/s) and what I get through Gluster (320 MB/s).
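
To narrow down where the reads slow down, my plan is to compare a raw sequential read from one brick with the same read through the FUSE mount, e.g. (paths and file names illustrative):

# dd if=/tank/ec_point/some_big_file of=/dev/null bs=16M
# dd if=/mnt/raid55/some_big_file of=/dev/null bs=16M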

Thanks in advance :)

Yann


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
