Re: Ceph Performance of Micron 5210 SATA?

 If you are asking, maybe run this fio job file against the raw device? It sweeps sequential, random, and mixed reads and writes at 4k, 128k, and 1024k block sizes, with direct I/O at queue depth 1. Note that filename=/dev/sdf writes straight to the device and will destroy whatever is on it, so point it at the disk you actually want to test.

[global]
# raw-device test; this destroys data on the target disk
ioengine=libaio
# drop the page cache for the target before starting
invalidate=1
# warm up for 30s before measurements begin
ramp_time=30
# queue depth 1, so results reflect per-I/O latency
iodepth=1
# each job runs time-based for 180s
runtime=180
time_based
# use O_DIRECT, bypassing the page cache
direct=1
# change this to the device under test
filename=/dev/sdf

# jobs run one at a time (stonewall); uncomment the write_bw_log /
# write_iops_log lines to record per-interval bandwidth and IOPS logs
[write-4k-seq]
stonewall
bs=4k
rw=write
#write_bw_log=sdx-4k-write-seq.results
#write_iops_log=sdx-4k-write-seq.results

[randwrite-4k-seq]
stonewall
bs=4k
rw=randwrite
#write_bw_log=sdx-4k-randwrite-seq.results
#write_iops_log=sdx-4k-randwrite-seq.results

[read-4k-seq]
stonewall
bs=4k
rw=read
#write_bw_log=sdx-4k-read-seq.results
#write_iops_log=sdx-4k-read-seq.results

[randread-4k-seq]
stonewall
bs=4k
rw=randread
#write_bw_log=sdx-4k-randread-seq.results
#write_iops_log=sdx-4k-randread-seq.results

[rw-4k-seq]
stonewall
bs=4k
rw=rw
#write_bw_log=sdx-4k-rw-seq.results
#write_iops_log=sdx-4k-rw-seq.results

[randrw-4k-seq]
stonewall
bs=4k
rw=randrw
#write_bw_log=sdx-4k-randrw-seq.results
#write_iops_log=sdx-4k-randrw-seq.results

[write-128k-seq]
stonewall
bs=128k
rw=write
#write_bw_log=sdx-128k-write-seq.results
#write_iops_log=sdx-128k-write-seq.results

[randwrite-128k-seq]
stonewall
bs=128k
rw=randwrite
#write_bw_log=sdx-128k-randwrite-seq.results
#write_iops_log=sdx-128k-randwrite-seq.results

[read-128k-seq]
stonewall
bs=128k
rw=read
#write_bw_log=sdx-128k-read-seq.results
#write_iops_log=sdx-128k-read-seq.results

[randread-128k-seq]
stonewall
bs=128k
rw=randread
#write_bw_log=sdx-128k-randread-seq.results
#write_iops_log=sdx-128k-randread-seq.results

[rw-128k-seq]
stonewall
bs=128k
rw=rw
#write_bw_log=sdx-128k-rw-seq.results
#write_iops_log=sdx-128k-rw-seq.results

[randrw-128k-seq]
stonewall
bs=128k
rw=randrw
#write_bw_log=sdx-128k-randrw-seq.results
#write_iops_log=sdx-128k-randrw-seq.results

[write-1024k-seq]
stonewall
bs=1024k
rw=write
#write_bw_log=sdx-1024k-write-seq.results
#write_iops_log=sdx-1024k-write-seq.results

[randwrite-1024k-seq]
stonewall
bs=1024k
rw=randwrite
#write_bw_log=sdx-1024k-randwrite-seq.results
#write_iops_log=sdx-1024k-randwrite-seq.results

[read-1024k-seq]
stonewall
bs=1024k
rw=read
#write_bw_log=sdx-1024k-read-seq.results
#write_iops_log=sdx-1024k-read-seq.results

[randread-1024k-seq]
stonewall
bs=1024k
rw=randread
#write_bw_log=sdx-1024k-randread-seq.results
#write_iops_log=sdx-1024k-randread-seq.results

[rw-1024k-seq]
stonewall
bs=1024k
rw=rw
#write_bw_log=sdx-1024k-rw-seq.results
#write_iops_log=sdx-1024k-rw-seq.results

[randrw-1024k-seq]
stonewall
bs=1024k
rw=randrw
#write_bw_log=sdx-1024k-randrw-seq.results
#write_iops_log=sdx-1024k-randrw-seq.results
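
To run the whole suite, save it as a job file (micron5210.fio is just a placeholder name here) and, after double-checking filename=, something like this should do; --output is optional but keeps the results around for later:

# runs every job above in order; WARNING: this overwrites the target device
fio micron5210.fio --output=micron5210-results.txt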


-----Original Message-----
From: mj [mailto:lists@xxxxxxxxxxxxx] 
Sent: 06 March 2020 10:01
To: ceph-users@xxxxxxx
Subject:  Re: Ceph Performance of Micron 5210 SATA?

Last Monday I already ran a quick test with those two disks; probably 
not that relevant, but I'm posting it anyway:

I created a two-disk Ceph 'cluster' on just the one local node and ran 
the following:

> root@ceph:~# rados bench -p scbench 10 write --no-cleanup
> hints = 1
> Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
> Object prefix: benchmark_data_ceph_48906
>   sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
>     0       0         0         0         0         0           -           0
>     1      16       107        91   363.961       364   0.0546517    0.155121
>     2      16       206       190   379.948       396      0.1529    0.159227
>     3      16       324       308   410.614       472   0.0972163    0.151421
>     4      16       458       442   441.942       536   0.0484349    0.141799
>     5      16       590       574   459.141       528   0.0445051    0.136922
>     6      16       727       711   473.941       548    0.181066    0.134468
>     7      16       856       840   479.941       516    0.187683    0.133199
>     8      16       970       954   476.942       456    0.070753    0.132642
>     9      16      1089      1073   476.831       476    0.193608    0.133754
>    10      16      1214      1198   479.142       500   0.0999212    0.132529
> Total time run:         10.097218
> Total writes made:      1215
> Write size:             4194304
> Object size:            4194304
> Bandwidth (MB/sec):     481.321
> Stddev Bandwidth:       60.481
> Max bandwidth (MB/sec): 548
> Min bandwidth (MB/sec): 364
> Average IOPS:           120
> Stddev IOPS:            15
> Max IOPS:               137
> Min IOPS:               91
> Average Latency(s):     0.132889
> Stddev Latency(s):      0.0645579
> Max latency(s):         0.336118
> Min latency(s):         0.0117049
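
Since the write run used --no-cleanup, the benchmark objects are still 
in the pool, so the matching read benchmarks could reuse them; something 
along these lines:

rados bench -p scbench 10 seq
rados bench -p scbench 10 rand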

Do let me know what else you'd want me to do.

MJ
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


