GlusterFS Vs NFS Read Performance confusing


Dear All,

We are currently using NFS to meet our data-sharing requirements. We are now facing some performance and scalability problems, so this setup no longer meets the needs of our network. We are therefore looking for ways to improve performance and scalability. To find a solid replacement for NFS I have analysed two file systems, GlusterFS and Red Hat GFS, and concluded that GlusterFS should improve both performance and scalability; it has all the features we are looking for. For testing purposes I am benchmarking NFS against GlusterFS. My results show that GlusterFS gives better performance overall, but I am getting some unacceptable read numbers. I am not able to understand how exactly the read operation behaves on NFS versus GlusterFS, and I don't know whether I am doing something wrong. I am describing my benchmark setup here to give a better idea of the read performance issue, and I have attached the NFS and GlusterFS read results. Could anyone please go through this and give me some guidance? It would make my benchmarking much more effective.
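For reference, this is roughly how I understand the iozone read comparison can be run so that the client cache does not dominate the numbers; the mount points and file size below are only examples, not my exact command line. Using a file larger than the 384 MB of RAM, or iozone's -U option (which unmounts and remounts the file system between tests and needs an /etc/fstab entry for the mount point), should keep the reads from being served entirely from cache:

# write the test file, then read/re-read it; -e/-c include fsync/close in the timing
iozone -i 0 -i 1 -s 512m -r 128k -e -c -f /mnt/glusterfs/iozone.tmp
iozone -i 0 -i 1 -s 512m -r 128k -e -c -f /mnt/nfs/iozone.tmp

# alternative: remount between tests to defeat the client-side cache
iozone -i 0 -i 1 -s 64m -r 128k -U /mnt/glusterfs -f /mnt/glusterfs/iozone.tmp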

This is my server and client hardware and software:

HARDWARE CONFIG:

Processor          : Intel(R) Celeron(R) CPU 1.70GHz
Number of cores    : single core (not dual-core)
RAM size           : 384 MB (128 MB + 256 MB)
RAM type           : DDR
RAM speed          : 266 MHz (3.8 ns)
Swap               : 1027 MB
Storage controller : ATA
Disk model/size    : SAMSUNG SV4012H / 40 GB, 2 MB cache
Storage speed      : 52.4 MB/sec
Spindle speed      : 5400 RPM
NIC type           : VIA Rhine III chipset (IRQ 18)
NIC speed          : 100 Mbps, full duplex

SOFTWARE:

Operating system   : Fedora Core 9 GNU/Linux
Linux version      : 2.6.9-42
Local FS           : ext3
NFS (nfs-utils)    : 1.1.2
GlusterFS version  : glusterfs 1.3.8 built on Feb 3 2008
Iozone             : iozone-3-5.fc9.i386 (file system benchmark tool)
ttcp               : ttcp-1.12-18.fc9.i386 (raw throughput measurement tool)
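To rule out the network itself, I can baseline the raw TCP throughput of the 100 Mbps link with ttcp before comparing the file systems. A minimal sketch (the server address is the same one used in the client volume file below):

# on the server (receiver, discard incoming data)
ttcp -r -s

# on the client (transmitter, send a test pattern)
ttcp -t -s 192.xxx.x.xxx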

These are the server and client volume files I am using for the benchmarking:

#GlusterFS Server Volume Specification

volume brick
  type storage/posix                   # POSIX FS translator
  option directory /bench              # /bench contains 25,000 files of 10 KB to 15 KB each
end-volume

volume iot
  type performance/io-threads
  option thread-count 4
  option cache-size 8MB
  subvolumes brick
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes iot
  option auth.ip.brick.allow * # Allow access to "brick" volume
end-volume
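For completeness, this is roughly how I start the server side with this spec file; the path /etc/glusterfs/glusterfs-server.vol is just where I assume the file is saved:

glusterfsd -f /etc/glusterfs/glusterfs-server.vol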



# GlusterFS Client Volume Specification

volume client
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.xxx.x.xxx
  option remote-subvolume brick
end-volume

volume readahead
  type performance/read-ahead
  option page-size 128KB     # 256KB is the default
  option page-count 4        # cache per file = (page-count x page-size); 2 is the default
  subvolumes client
end-volume

volume iocache
  type performance/io-cache
  #option page-size 128KB   # 128KB is the default page size
  option cache-size 256MB   # 32MB is the default cache size
  option page-count 4
  subvolumes readahead
end-volume

volume writeback
  type performance/write-behind
  option aggregate-size 128KB
  option flush-behind on
  subvolumes iocache
end-volume
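And this is roughly how I mount the two file systems for the comparison, assuming the client spec is saved as /etc/glusterfs/glusterfs-client.vol and /bench is also exported over NFS (both paths are examples only):

# GlusterFS client (1.3.x takes the spec file with -f)
glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs

# NFS v3 mount of the same data for the baseline numbers
mount -t nfs -o vers=3 192.xxx.x.xxx:/bench /mnt/nfs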


I am confused by these results. I don't have a clear idea of how to trace the read path and get a fair, comparable read-performance measurement. I think I am misunderstanding the buffer cache concepts.
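One thing I am unsure about is how much of the read numbers come from the client-side cache rather than from the network. A simple check I can think of (the file name is only an example) is to read the same file twice and compare the times, then unmount and remount before taking a cold-read measurement:

time cat /mnt/glusterfs/file0001 > /dev/null   # first read: data comes over the network
time cat /mnt/glusterfs/file0001 > /dev/null   # second read: likely served from io-cache / page cache
umount /mnt/glusterfs && glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs   # flush cached data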


