Re: IOZone Performance is very Strange

Hi Anny,

As Jeff mentioned, you need to add the following option for iozone testing:
-e: include flush in the timing calculations, to remedy the cache effect

and you may also add:
-n: set the minimum file size for auto mode (e.g., -n 256M) to skip
testing of small file sizes

and drop -z if you want to omit testing small record sizes (auto mode
then starts at 64K instead of 4K for large files).
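
For example, putting these together (a sketch; keep your own paths and
treat the sizes as illustrative):

/usr/bin/iozone -acRe -n 256M -g 1G -f /mnt/ceph/test0516.dbf \
    -b /exceloutput0516.xls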

--
Henry


2011/5/18 Jeff Wu <cpwu@xxxxxxxxxxxxx>:
>
> Hi AnnyRen,
>
> Given those test results, writing 1G of data to your ceph cluster
> should run at about 100M/sec.
> Maybe the iozone test results didn't include iozone's flush time.
>
> Could you list your hardware platform info?
> network: 1G, 4G, 8G, FC ...? cpu? memory size? disk: PATA, SATA, SAS, SSD?
> and
> could you try some other iozone commands? For instance:
>
> 1)
> add "-e" param to include flush(fsync,fflush) in the timing
> calculations.
>
> /usr/bin/iozone -azcR -e -f /mnt/ceph/test0516.dbf \
> -g 1G -b /exceloutput0516.xls
>
> 2) run a test whose data size is twice your host memory size:
>
> $./iozone -z -c -e -a -n 512M -g {memory_size}*2M -i 0 -i 1 -i 2 \
> -f /mnt/ceph/fio -Rb ./iozone.xls
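>
> for example, derive it automatically from /proc/meminfo (a sketch;
> the MEM_MB variable name is just for illustration):
>
> MEM_MB=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
> ./iozone -z -c -e -a -n 512M -g $((MEM_MB * 2))M -i 0 -i 1 -i 2 \
>   -f /mnt/ceph/fio -Rb ./iozone.xls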
>
> or
> 3)
> #!/bin/sh
>
> for i in 32 64 128 256
> do
>   ./iozone -r ${i}k -t 10 -s 4096M -i 0 -i 1 -i 2 \
>     -F /mnt/ceph/F1 /mnt/ceph/F2 /mnt/ceph/F3 /mnt/ceph/F4 /mnt/ceph/F5 \
>        /mnt/ceph/F6 /mnt/ceph/F7 /mnt/ceph/F8 /mnt/ceph/F9 /mnt/ceph/F10
> done
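>
> each run prints its aggregate numbers on the "Children see throughput"
> lines; assuming you save the loop above as run_iozone.sh (the name is
> just illustrative), you can collect them with:
>
> sh run_iozone.sh | grep "Children see throughput"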
>
>
>
>
> Jeff
>
>
>
>
>
>
> On Tue, 2011-05-17 at 15:34 +0800, AnnyRen wrote:
>> Hi, Jeff:
>>
>> I run "ceph osd tell osd_num bench" with 1 times per osd
>>
>> and use ceph -w to observe every osd performance,
>>
>> osd0: [INF] bench: wrote 1024 MB in blocks of 4096 KB in 10.875844 sec at 96413 KB/sec
>> osd1: [INF] bench: wrote 1024 MB in blocks of 4096 KB in 11.784985 sec at 88975 KB/sec
>> osd2: [INF] bench: wrote 1024 MB in blocks of 4096 KB in 11.161067 sec at 93949 KB/sec
>> osd3: [INF] bench: wrote 1024 MB in blocks of 4096 KB in 10.798796 sec at 97101 KB/sec
>> osd4: [INF] bench: wrote 1024 MB in blocks of 4096 KB in 14.437141 sec at 72630 KB/sec
>> osd5: [INF] bench: wrote 1024 MB in blocks of 4096 KB in 14.451444 sec at 72558 KB/sec
>> osd6: [INF] bench: wrote 1024 MB in blocks of 4096 KB in 11.083872 sec at 94603 KB/sec
>> osd7: [INF] bench: wrote 1024 MB in blocks of 4096 KB in 11.062728 sec at 94784 KB/sec
>> osd8: [INF] bench: wrote 1024 MB in blocks of 4096 KB in 14.137312 sec at 74170 KB/sec
>> osd9: [INF] bench: wrote 1024 MB in blocks of 4096 KB in 13.489992 sec at 77729 KB/sec
>>
>>
>> and I ran
>> root@MDS2:/mnt/ceph# rados bench 60 write -p data
>>
>> The result is:
>>
>> Total time run:        60.553247
>> Total writes made:     1689
>> Write size:            4194304
>> Bandwidth (MB/sec):    111.571
>>
>> Average Latency:       0.573209
>> Max latency:           2.25691
>> Min latency:           0.218494
>>
>>
>>
>> 2011/5/17 Jeff Wu <cpwu@xxxxxxxxxxxxx>:
>> > Hi AnnyRen
>> >
>> > Could you run the following commands and give us the test results?
>> >
>> > $ceph osd tell OSD-N bench    // OSD-N : osd number : 0,1,2 ....
>> > $ceph -w
>> >
>> > $rados bench 60 write -p data    // refer to "rados -h"
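>> >
>> > e.g., to bench all ten OSDs in one go (a sketch, assuming osd ids 0-9):
>> >
>> > $for n in $(seq 0 9); do ceph osd tell $n bench; done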
>> >
>> > Jeff
>> >
>> >
>> >
>> > On Tue, 2011-05-17 at 11:53 +0800, AnnyRen wrote:
>> >> I'm running iozone on EXT4 with Ceph v0.26,
>> >> but I got weird results: most write performance exceeds 1GB/s, even
>> >> up to 3GB/s.
>> >> I don't think that performance output is normal.
>> >>
>> >> The command line I used is: /usr/bin/iozone -azcR -f
>> >> /mnt/ceph/test0516.dbf -g 1G -b /exceloutput0516.xls
>> >> The output file is attached...
>> >>
>> >> and my environment is composed of 15 physical machines:
>> >>
>> >> 3 MON, 2 MDS (1 active, 1 standby), 10 OSD (1 osd daemon (3TB) per host)
>> >> EXT4 format
>> >> data replication size: 3
>> >>
>> >>
>> >>
>> >> "Writer report"
>> >> Â Â Â Â "4" Â"8" Â"16" Â"32" Â"64" Â"128" Â"256" Â"512" Â"1024"
>> >> "2048" Â"4096" Â"8192" Â"16384"
>> >> "64" Â 983980 Â1204796 Â1210227 Â1143223 Â1357066
>> >> "128" Â 1007629 Â1269781 Â1406136 Â1391557 Â1436229 Â1521718
>> >> "256" Â 1112909 Â1430119 Â1523457 Â1652404 Â1514860 Â1639786 Â1729594
>> >> "512" Â 1150351 Â1475116 Â1605228 Â1723770 Â1797349 Â1712772 Â1783912 Â1854787
>> >> "1024" Â 1213334 Â1471481 Â1679160 Â1828574 Â1888889 Â1899750 Â1885572
>> >> Â1865912 Â1875690
>> >> "2048" Â 1229274 Â1540849 Â1708146 Â1843410 Â1903457 Â1980705 Â1930406
>> >> Â1913634 Â1906837 Â1815744
>> >> "4096" Â 1213284 Â1528348 Â1674646 Â1762434 Â1872096 Â1882352 Â1881528
>> >> Â1903416 Â1897949 Â1835102 Â1731177
>> >> "8192" Â 204560 Â155186 Â572387 Â238548 Â186597 Â429036 Â187327
>> >> 157205 Â553771 Â416512 Â299810 Â405842
>> >> "16384" Â 699749 Â559255 Â687450 Â541030 Â828776 Â555296 Â742561
>> >> 525483 Â604910 Â452423 Â564557 Â670539 Â970616
>> >> "32768" Â 532414 Â829610 Â812215 Â879441 Â863159 Â864794 Â865938
>> >> 804951 Â916352 Â879582 Â608132 Â860732 Â1239475
>> >> "65536" Â 994824 Â1096543 Â1095791 Â1317968 Â1280277 Â1390267 Â1259868
>> >> Â1205214 Â1339111 Â1346927 Â1267888 Â863234 Â1190221
>> >> "131072" Â 1063429 Â1165115 Â1102650 Â1554828 Â1182128 Â1185731
>> >> 1190752 Â1195792 Â1277441 Â1211063 Â1237567 Â1226999 Â1336961
>> >> "262144" Â 1280619 Â1368554 Â1497837 Â1633397 Â1598255 Â1609212
>> >> 1607504 Â1665019 Â1590515 Â1548307 Â1591258 Â1505267 Â1625679
>> >> "524288" Â 1519583 Â1767928 Â1738523 Â1883151 Â2011216 Â1993877
>> >> 2023543 Â1867440 Â2106124 Â2055064 Â1906668 Â1778645 Â1838988
>> >> "1048576" Â 1580851 Â1887530 Â2044131 Â2166133 Â2236379 Â2283578
>> >> 2257454 Â2296612 Â2271066 Â2101274 Â1905829 Â1605923 Â2158238
>> >>
>> >>
>> >>
>> >> "Reader report"
>> >> Â Â Â Â "4" Â"8" Â"16" Â"32" Â"64" Â"128" Â"256" Â"512" Â"1024"
>> >> "2048" Â"4096" Â"8192" Â"16384"
>> >> "64" Â 1933893 Â2801873 Â3057153 Â3363612 Â3958892
>> >> "128" Â 2286447 Â3053774 Â2727923 Â3468030 Â4104338 Â4557257
>> >> "256" Â 2903529 Â3236056 Â3245838 Â3705040 Â3654598 Â4496299 Â5117791
>> >> "512" Â 2906696 Â3437042 Â3628697 Â3431550 Â4871723 Â4296637 Â6246213 Â6395018
>> >> "1024" Â 3229770 Â3483896 Â4609294 Â3791442 Â4614246 Â5536137 Â4550690
>> >> Â5048117 Â4966395
>> >> "2048" Â 3554251 Â4310472 Â3885431 Â4096676 Â6401772 Â4842658 Â5080379
>> >> Â5184636 Â5596757 Â5735012
>> >> "4096" Â 3416292 Â4691638 Â4321103 Â5728903 Â5475122 Â5171846 Â4819300
>> >> Â5258919 Â6408472 Â5044289 Â4079948
>> >> "8192" Â 3233004 Â4615263 Â4536055 Â5618186 Â5414558 Â5025700 Â5553712
>> >> Â4926264 Â5634770 Â5281396 Â4659702 Â3652258
>> >> "16384" Â 3141058 Â3704193 Â4567654 Â4395850 Â4568869 Â5387732
>> >> 4436432 Â5808029 Â5578420 Â4675810 Â3913007 Â3911225 Â3961277
>> >> "32768" Â 3704273 Â4598957 Â4088278 Â5133719 Â5896692 Â5537024
>> >> 5234412 Â5398271 Â4942992 Â4118662 Â3729099 Â3511757 Â3481511
>> >> "65536" Â 4131091 Â4210184 Â5341188 Â4647619 Â6077765 Â5852474
>> >> 5379762 Â5259330 Â5488249 Â5246682 Â4342682 Â3549202 Â3286487
>> >> "131072" Â 3582876 Â5251082 Â5332216 Â5269908 Â5303512 Â5574440
>> >> 5635064 Â5796372 Â5406363 Â4958839 Â4435918 Â3673443 Â3647874
>> >> "262144" Â 3659283 Â4551414 Â5746231 Â5433824 Â5876196 Â6011650
>> >> 5552000 Â5629260 Â5298830 Â4982226 Â4628902 Â4065823 Â3421924
>> >> "524288" Â 3905973 Â5488778 Â5219963 Â6047356 Â5916811 Â6180455
>> >> 5495733 Â5925628 Â5637512 Â5537123 Â3517132 Â3550861 Â3047013
>> >> "1048576" Â 3855595 Â5634571 Â5410298 Â6001809 Â6464713 Â6299610
>> >> 5894249 Â5516031 Â5800161 Â5209203 Â4295840 Â3724983 Â3641623
>> >
>> >
>

