GlusterFS performance for random file access

Dear GlusterFS developers,

I'm considering using GlusterFS on our new parallel (two nodes, 10Gb Ethernet) centralized fileserver for our HPC clusters (several small ones, tens of CPUs). So I ran performance tests with the latest GlusterFS as well as with plain NFS and the recent Lustre 1.6.3.

GlusterFS looks very attractive because, as I understand it, unlike Lustre it can also be used on non-x86 Linux platforms, which we might get in the near future.

So I ran a Bonnie++ benchmark using one of the servers (dual Opteron, 4GB RAM, SATA disk, CentOS Linux 5) and a client (an old P4 2.4GHz box, 512MB RAM, Gigabit Ethernet, CentOS 4.5). I used an 8GB file size for the Bonnie++ tests and ran them both with and without forced-flush IO (the -b option; the data below are for the latter case).
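Roughly, the runs looked like this (the mount point and the user below are just examples, and I am quoting the flags from memory):

  # run with normal buffered IO (the numbers shown below)
  bonnie++ -d /mnt/glusterfs -s 8192 -u nobody

  # same run with forced flush after every write
  bonnie++ -d /mnt/glusterfs -s 8192 -u nobody -b

Some of the Bonnie++ results are as follows: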

====================================================
 FileSystem           Sequential Output, K/sec
                      Per-char    Block    Rewrite
====================================================
 NFS                    14442     30419      7710
 Lustre                 16012     35228     19018
 GlusterFS              16582     15833      8358
 GlusterFS, wb          17988     43774      8409
 GlusterFS, ra          18414     15863      1804
 GlusterFS, ra, wb      22403     41821       355
====================================================

====================================================
 FileSystem           Sequential Input, K/sec   Random
                      Per-char    Block         seeks, #/s
====================================================
 NFS                    20229     49510          178.8
 Lustre                 17284     47753           53.0
 GlusterFS              16791     16815          161.4
 GlusterFS, wb          15304     17438          174.1
 GlusterFS, ra          19420     54803          143.3
 GlusterFS, ra, wb      19900     54427          144.4
====================================================

Without the performance translators, GlusterFS block throughput was only about as good as non-buffered IO. At the same time, the Rewrite and Seek tests were fine (about 8000 K/sec and 170 seeks/s).

Then I applied the read-ahead and write-behind translators on the client side. Block reads and writes reached the same level as NFS and Lustre or better, but the Bonnie++ Rewrite test became much worse (by an order of magnitude, actually, to below 800 K/sec). There is no significant drop in the Seek test, though, so I guess the poor Rewrite results are related to how the wb and ra translators handle writes and reads, not seeks.
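For completeness, the client-side volume spec I used looks roughly like this (I am reproducing it from memory, so the hostname and the exact option values are only illustrative):

  volume client
    type protocol/client
    option transport-type tcp/client
    option remote-host fileserver1      # our GlusterFS server (name changed here)
    option remote-subvolume brick
  end-volume

  volume writebehind
    type performance/write-behind
    option aggregate-size 128KB         # value quoted from memory
    subvolumes client
  end-volume

  volume readahead
    type performance/read-ahead
    option page-size 128KB              # values quoted from memory
    option page-count 4
    subvolumes writebehind
  end-volume

(The "wb", "ra" and "ra, wb" rows in the tables above are with the corresponding translators loaded separately and together.)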

So, could you advise me whether there is a solution for this -- can I have it both ways with GlusterFS, good IO bandwidth and fast rewrite? And if so, how should I tune it? Thank you very much!


--
Best regards,
Grigory Shamov
Kazan Science Centre of RAS,
Kazan, Russian Federation





