Re: Problem in GlusterFS vs NFS Read Performance

Dear All,

We are currently using NFS to meet our data-sharing requirements, but we are now facing some performance and scalability problems, so this setup no longer meets the needs of our network. To find a solid replacement I have analysed two file systems, GlusterFS and Red Hat GFS, and concluded that GlusterFS will give us the performance and scalability we need; it has all the features we are looking for.

For testing purposes I am benchmarking NFS against GlusterFS. My results show that GlusterFS generally performs better, but I am getting some read numbers I cannot explain. I do not understand exactly how the read path behaves in NFS compared to GlusterFS, and I may be doing something wrong. To give a better idea of the read-performance issue, I have attached the NFS and GlusterFS read results. If anyone can go through them and give me some guidance, it would make my benchmarking much more effective.

This is my server and client hardware and software:

HARDWARE CONFIG:

Processor core speed  : Intel(R) Celeron(R) CPU 1.70GHz

Number of cores  : Single Core (not dual-core)

RAM size  : 384 MB (128 MB + 256 MB)

RAM type  : DDR

RAM Speed  : 266 MHz (3.8 ns)

Swap  : 1027MB

Storage controller  : ATA device

Disk model/size  : SAMSUNG SV4012H / 40 GB, 2 MB cache

Storage speed  : 52.4 MB/sec

Spindle Speed  : 5400 RPM (revolutions per minute)

NIC Type  : VIA Rhine III chipset IRQ 18

NIC Speed  : 100 Mbps/Full-Duplex Card

SOFTWARE:

Operating System : Fedora Core 9 GNU/Linux

Linux version  : 2.6.9-42

Local FS  : Ext3

NFS version  : 1.1.2

GlusterFS version: glusterfs 1.3.8 built on Feb 3 2008

Iozone  : iozone-3-5.fc9.i386 (File System Benchmark Tool)

ttcp  : ttcp-1.12-18.fc9.i386 (raw throughput measurement tool)
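
For reference, a read/re-read sweep over this file-size range can be driven with Iozone roughly as follows (a sketch only; the mount point and output file below are placeholders, not the exact command behind the attached results):

# Iozone sweep from 128 KB to 1 GB: write (to create the test file) plus read/re-read,
# with an OpenOffice/Excel-compatible report written out
iozone -a -n 128k -g 1g -i 0 -i 1 -f /mnt/test/iozone.tmp -R -b read_results.xls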

These are the server and client volume files I am using for the benchmarking:

#GlusterFS Server Volume Specification

volume brick
  type storage/posix                   # POSIX FS translator
  option directory /bench        # /bench contains 25,000 files of 10-15 KB each
end-volume

volume iot
  type performance/io-threads
  option thread-count 4 
  option cache-size 8MB
  subvolumes brick
end-volume

volume server
  type protocol/server
  option transport-type tcp/server    
  subvolumes iot
  option auth.ip.brick.allow * # Allow access to "brick" volume
end-volume
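
With this spec file the server is started roughly like this (the spec-file path is only an example):

# start the GlusterFS 1.3 server daemon with the server spec file above
glusterfsd -f /etc/glusterfs/glusterfs-server.vol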



# GlusterFS Client Volume Specification 

volume client
  type protocol/client
  option transport-type tcp/client    
  option remote-host 192.xxx.x.xxx       
  option remote-subvolume brick       
end-volume

volume readahead
  type performance/read-ahead
  option page-size 128KB     # 256KB is the default option
  option page-count 4     # cache per file = page-count x page-size (default page-count is 2)
  subvolumes client
end-volume

volume iocache
  type performance/io-cache
  #option page-size 128KB  # 128KB is the default
  option cache-size 256MB  # 32MB is the default
  option page-count 4 
  subvolumes readahead
end-volume

volume writeback
  type performance/write-behind
  option aggregate-size 128KB
  option flush-behind on
  subvolumes iocache  
end-volume
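
The client is then mounted with the client spec file above, for example (paths are examples; adjust to the actual spec file and mount point):

# mount the GlusterFS client using the client spec file
glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs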


This result confuses me. I do not know how to trace what is happening or how to get a fair, comparable read-performance result. I think I am misunderstanding how the buffer cache comes into play.

From the attached NFS read results I understand that, since I have 384 MB of RAM and I benchmark file sizes ranging from 128 KB to 1 GB, the results up to a 256 MB file size reflect buffer-cache performance, while at 512 MB and 1 GB the throughput drops to roughly the link speed. But in the case of GlusterFS I am not able to understand what is happening.
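
One way to check whether the GlusterFS figures for the smaller file sizes are also coming from client-side caching is to flush the page cache between runs, or to let Iozone remount the file system for each test. For example (the drop_caches interface assumes a reasonably recent 2.6 kernel, which may not match the kernel listed above, and the paths are placeholders):

# write back dirty pages, then drop page cache, dentries and inodes,
# so the next read really has to go over the network
sync
echo 3 > /proc/sys/vm/drop_caches

# or let Iozone unmount and remount the test file system between tests
iozone -a -n 128k -g 1g -i 0 -i 1 -f /mnt/test/iozone.tmp -U /mnt/test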

Can anyone please help me?



Thanks for your time
Mohan

Attachment: nfsread_Vs_GlusterFSread.ods
Description: application/vnd.oasis.opendocument.spreadsheet

