No subject


We're conducting performance benchmark runs to evaluate Linux performance as an NFS file server. It is observed that an unusually high percentage of benchmark time is spent in the "read" operation: a sampled workload consisting of 18% reads consumes 63% of total benchmark time. Did this problem get analyzed before (or, even better, is there a patch)? We're on a 2.4.19 kernel, NFSv3 over UDP, with ext3 as the local file system.

Thanks in advance.

gluster-users at gluster.org
Dear All,

We are currently using NFS to meet our data-sharing requirements. We are now facing some performance and scalability problems, so this setup no longer meets the requirements of our network. We are therefore looking for possible ways to increase performance and scalability. As a strong replacement for NFS I have analysed two file systems, GlusterFS and Red Hat GFS, and concluded that GlusterFS should improve performance and scalability; it has all the features we are looking for. For testing purposes I am benchmarking NFS against GlusterFS. My benchmark results show that GlusterFS gives better performance overall, but I am getting some unacceptable read performance, and I am not able to understand how exactly the read operation is performed by NFS and GlusterFS, or whether I am doing something wrong. Here I am showing the benchmark results to give a better idea of my read-performance issue. I have attached the NFS and GlusterFS read values; please go through them and give me some guidance. It will make my benchmarking much more effective.
This is my server and client hardware and software:

HARDWARE CONFIG:

Processor core speed : Intel(R) Celeron(R) CPU 1.70GHz
Number of cores : Single core (not dual-core)
RAM size : 384MB (128MB+256MB)
RAM type : DDR
RAM speed : 266 MHz (3.8 ns)
Swap : 1027MB
Storage controller : ATA device
Disk model/size : SAMSUNG SV4012H / 40 GB, 2 MB cache
Storage speed : 52.4 MB/sec
Spindle speed : 5400 rpm (revolutions per minute)
NIC type : VIA Rhine III chipset, IRQ 18
NIC speed : 100 Mbps, full duplex

SOFTWARE:

Operating system : Fedora Core 9 GNU/Linux
Linux version : 2.6.9-42
Local FS : ext3
NFS version : 1.1.2
GlusterFS version : glusterfs 1.3.8 built on Feb 3 2008
Iozone : iozone-3-5.fc9.i386 (file system benchmark tool)
ttcp : ttcp-1.12-18.fc9.i386 (raw throughput measurement tool)
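
Since ttcp is listed among the tools, one quick cross-check is to measure what the link actually delivers before comparing the file systems against it. A minimal sketch with classic ttcp (the host name is an example):

    # on the server: receive into a sink
    ttcp -r -s

    # on the client: transmit a test pattern and report throughput when done
    ttcp -t -s server.example.com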
These are the server and client vol files I am using for the benchmarking:

# GlusterFS Server Volume Specification

volume brick
  type storage/posix           # POSIX FS translator
  option directory /bench      # /bench dir contains 25,000 files of size 10KB-15KB
end-volume

volume iot
  type performance/io-threads
  option thread-count 4
  option cache-size 8MB
  subvolumes brick
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes iot
  option auth.ip.brick.allow *  # Allow access to "brick" volume
end-volume

# GlusterFS Client Volume Specification

volume client
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.xxx.x.xxx
  option remote-subvolume brick
end-volume

volume readahead
  type performance/read-ahead
  option page-size 128KB       # 256KB is the default option
  option page-count 4          # cache per file = (page-count x page-size); 2 is the default option
  subvolumes client
end-volume

volume iocache
  type performance/io-cache
  #option page-size 128KB      ## default is 32MB
  option cache-size 256MB      # 128KB is the default option
  option page-count 4
  subvolumes readahead
end-volume

volume writeback
  type performance/write-behind
  option aggregate-size 128KB
  option flush-behind on
  subvolumes iocache
end-volume

I am confused by these results. I have no idea how to trace them and get a good, comparable read-performance result. I think I am misunderstanding the buffer-cache concepts.
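
For completeness, this is roughly how the two spec files are used with glusterfs 1.3 (the file names and mount point here are illustrative, not the exact paths on my machines):

    # on the server: serve the volumes described in the server spec
    glusterfsd -f /etc/glusterfs/glusterfs-server.vol

    # on the client: mount the translator stack at a local mount point
    glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs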
From the attached NFS read results, I understand the following: I have 384MB of RAM and am benchmarking file sizes ranging from 128KB to 1GB, so up to a file size of 256MB I am getting buffer-cache performance, and at file sizes of 512MB and 1GB I am getting roughly link speed. But in the GlusterFS case I am not able to understand what is happening.

Please, can anyone help me?

NFS:
iozone -Raceb ./perffinal.wks -y 4K -q 128K -n 128K -g 1G -i 0 -i 1

Reader report (values in KB/s; rows are file size in KB, columns are record size in KB):

File size       4          8         16         32         64        128
128        744701     727625     935039     633768     499971     391433
256        920892    1085148    1057519     931149     551834     380335
512        937558    1075517    1100810     904515     558917     368605
1024       974395    1072149    1094105     969724     555319     379390
2048      1026059    1125318    1137073    1005356     568252     375232
4096      1021220    1144780    1169589    1030467     578615     376367
8192       965366    1153315    1071693    1072681     607040     371771
16384     1008989    1133837    1163806    1046171     600500     376056
32768     1022692    1165701    1175739    1065870     630626     363563
65536     1005490    1152909    1168181    1048258     631148     374343
131072    1011405    1161491    1176534    1048509     637910     375741
262144    1011217    1130486    1118877    1075740     636433     375511
524288       9563       9562       9568       9551       9525       9562
1048576      9499       9520       9513       9535       9493       9469
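
As a sanity check on that plateau, the arithmetic for the 100 Mbps link (the overhead figure is a rough rule of thumb, not a measurement):

    100 Mbit/s / 8 = 12.5 MB/s theoretical payload ceiling
    less Ethernet/IP/TCP and NFS overhead -> roughly 9-11 MB/s in practice
    measured plateau: ~9500 KB/s = ~9.3 MB/s, essentially wire speed

So the 512MB and 1GB rows look like genuinely network-limited reads, while every smaller file still fits in the client's page cache.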
GlusterFS:
iozone -Raceb /root/glusterfs/perfgfs2.wks -y 4K -q 128K -n 128K -g 1G -i 0 -i 1

Reader report (values in KB/s; rows are file size in KB, columns are record size in KB):

File size       4          8         16         32         64        128
128         48834      50395      49785      48593      48450      47959
256         15276      15209      15210      15100      14998      14973
512         12343      12333      12340      12291      12202      12213
1024        11330      11334      11327      11303      11276      11283
2048        10875      10881      10877      10873      10857      10865
4096        10671      10670       9706      10673       9685      10640
8192        10572      10060      10571      10573      10555      10064
16384       10522      10523      10523      10522      10522      10263
32768       10494      10497      10495      10493      10497      10497
65536       10484      10483      10419      10483      10485      10485
131072      10419      10475      10477      10445      10445      10478
262144      10323      10241      10312      10226      10320      10237
524288      10074       9966       9707       8567       8213       9046
1048576      7440       7973       5737       7101       7678       5743
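
One way to make the NFS and GlusterFS numbers directly comparable is to take the client page cache out of the picture between runs. A sketch of two options (the drop_caches knob needs a 2.6.16 or later kernel, so it may not exist on this 2.6.9 build; iozone's -I flag requests O_DIRECT where the underlying file system supports it, and the .wks file name is an example):

    # flush the page cache, dentries and inodes before each run
    sync
    echo 3 > /proc/sys/vm/drop_caches

    # or have iozone bypass the cache entirely with direct I/O
    iozone -Raceb ./nocache.wks -I -y 4K -q 128K -n 128K -g 1G -i 0 -i 1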
Any idea why the values in the NFS test are so much higher? Something is clearly different, but I am not able to understand what.

Thanks for your time
Mohan



