Re: dd testing from within the VM

Hi Ken,

dd is okay, but you should consider the fact that dd performs a sequence of writes.

So if your later production usage involves random writes, this test is
basically only good for measuring the maximum sequential write
performance of an idle cluster.

And 250 MB/s across 200 HDDs is quite bad for a sequential write.

The sequential write speed of a single 7200 RPM SATA HDD should be
around 70-100 MB/s, maybe more.

So 200 of them, idle, writing a sequential workload, and ending up at
250 MB/s does not look good to me.

So either your network is not good, or your settings are not good, or
your replica count is too high, or something like that.

At least by my math, with 200 HDDs that is only about 1.25 MB/s of
write throughput per HDD.
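
And that is before replication. As a rough sanity check, assuming the
Ceph default of 3 replicas (your pool may be configured differently):
256 MB/s at the client means about 3 x 256 = 768 MB/s actually written
to the OSDs, still only around 3.8 MB/s per HDD, far below what a
single SATA disk can do sequentially.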

I assume that your 4 GB test file will not be spread over all 200 HDDs.
But still, the result does not look like good performance.

fio is a nice test tool with many different settings (random vs.
sequential I/O, block size, queue depth).
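
For example, a 4k random-write run that is closer to a production-like
workload could look like this (just a sketch; the file name, size, job
count, and iodepth are assumptions you should adapt to your setup):

# fio --name=randwrite --filename=test.file --rw=randwrite --bs=4k \
      --size=4G --direct=1 --ioengine=libaio --iodepth=32 \
      --numjobs=4 --runtime=60 --time_based --group_reporting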

---

The effect of conv=fdatasync will only be as big as the RAM of your
test client: it merely adds one flush at the end of the run, so it only
accounts for whatever dirty data the page cache was still holding. With
a 4 GB file, writeback to the cluster is already happening throughout
the run, which is why your two results are nearly identical.
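
If you want to take the page cache out of the picture entirely, dd can
also do direct I/O (a sketch; a larger block size is used here because
4k direct writes are slow by design):

# dd if=/dev/zero of=test.file bs=4M count=1k oflag=direct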


-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:

IP Interactive UG (haftungsbeschraenkt)
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402 at the Amtsgericht Hanau (district court)
Managing director: Oliver Dzombic

Tax no.: 35 236 3622 1
VAT ID: DE274086107


On 19.05.2016 at 04:40, Ken Peng wrote:
> Hi,
> 
> Our VM has been using Ceph as block storage for both the system and
> data partition.
> 
> This is what dd shows,
> 
> # dd if=/dev/zero of=test.file bs=4k count=1024k
> 1048576+0 records in
> 1048576+0 records out
> 4294967296 bytes (4.3 GB) copied, 16.7969 s, 256 MB/s
> 
> When running dd again with the fdatasync argument, the result is similar.
> 
> # dd if=/dev/zero of=test.file bs=4k count=1024k conv=fdatasync
> 1048576+0 records in
> 1048576+0 records out
> 4294967296 bytes (4.3 GB) copied, 17.6878 s, 243 MB/s
> 
> 
> My questions include,
> 
> 1. For a cluster which has more than 200 disks as OSD storage (SATA
> only), where both the cluster and data networks are 10Gbps, are the
> results above reasonable performance from within the VM?
> 
> 2. is "dd" suitable for testing a block storage within the VM?
> 
> 3. why "fdatasync" influences nothing on the testing?
> 
> Thank you.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



