Re: Cephfs slow 6MB/s and rados bench sort of ok.

I was not trying to compare the test results; I know they are different. 
I am showing that reading is slow on cephfs (I am doing an rsync to 
cephfs, and I assumed that rsync is just reading the file in a similar 
way).

And the cluster is in more or less the same OK state.

Meanwhile I did a similar test with ceph-fuse, and I am getting the 
speeds I am used to.


[@c04 folder]# dd if=file1 of=/dev/null status=progress
12305+1 records in
12305+1 records out
6300206 bytes (6.3 MB) copied, 0.100237 s, 62.9 MB/s
[@c04 folder]# dd if=file2 of=/dev/null status=progress
3116352000 bytes (3.1 GB) copied, 29.143809 s, 107 MB/s
6209378+1 records in
6209378+1 records out
3179201945 bytes (3.2 GB) copied, 29.7547 s, 107 MB/s
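
For reference, the two clients can be mounted side by side roughly like 
this (the monitor address, mount points and keyring path are 
placeholders, not taken from my setup):

# kernel client
mount -t ceph 192.168.1.1:6789:/ /mnt/cephfs-kernel \
  -o name=admin,secretfile=/etc/ceph/admin.secret

# FUSE client (takes its defaults from /etc/ceph/ceph.conf)
ceph-fuse /mnt/cephfs-fuse

# same read test against both mounts
dd if=/mnt/cephfs-kernel/file2 of=/dev/null status=progress
dd if=/mnt/cephfs-fuse/file2 of=/dev/null status=progress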


-----Original Message-----
From: Igor Fedotov [mailto:ifedotov@xxxxxxx] 
Sent: Tuesday, 28 August 2018 11:59
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Cephfs slow 6MB/s and rados bench sort of ok.

Hi Marc,


In general dd isn't the best choice for benchmarking.

In your case there are at least three differences from rados bench:

1) If I haven't missed something, then you're comparing reads vs. 
writes.

2) The block size is different (512 bytes for dd vs. 4 MB for rados 
bench; see the example commands below).

3) Just a single dd instance vs. 16 concurrent threads for rados bench.
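
For a closer apples-to-apples comparison you could, for example, raise 
the dd block size and run a read bench against the same pool. This is 
just a sketch (fs_data is the pool name from your bench output; file1 
through file4 are placeholder names):

# read with a 4M block size, bypassing the page cache
dd if=5GB.img of=/dev/null bs=4M iflag=direct status=progress

# several parallel dd instances approximate the 16 bench threads
for i in 1 2 3 4; do dd if=file$i of=/dev/null bs=4M iflag=direct & done; wait

# write objects but keep them, then run a sequential read bench
rados bench -p fs_data 10 write --no-cleanup
rados bench -p fs_data 10 seq
rados -p fs_data cleanup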


Thanks,

Igor



On 8/28/2018 12:50 PM, Marc Roos wrote:
> I have an idle test cluster (centos7.5, Linux c04 
> 3.10.0-862.9.1.el7.x86_64) and a client kernel mount of cephfs.
>
> I tested reading a few files on this cephfs mount and got very low 
> results compared to the rados bench. What could be the issue here?
>
> [@client folder]# dd if=5GB.img of=/dev/null status=progress
> 954585600 bytes (955 MB) copied, 157.455633 s, 6.1 MB/s
>
>
>
> I included this rados bench output, which shows that the cluster 
> performance is roughly as expected.
>
> [@c01 ~]# rados bench -p fs_data 10 write
> hints = 1
> Maintaining 16 concurrent writes of 4194304 bytes to objects of size
> 4194304 for up to 10 seconds or 0 objects
> Object prefix: benchmark_data_c01_453883
>    sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
>      0       0         0         0         0         0           -           0
>      1      16        58        42   167.967       168    0.252071    0.323443
>      2      16       106        90   179.967       192    0.583383    0.324867
>      3      16       139       123   163.973       132    0.170865    0.325976
>      4      16       183       167   166.975       176    0.413676    0.361364
>      5      16       224       208   166.374       164    0.394369    0.365956
>      6      16       254       238   158.642       120    0.698396    0.382729
>      7      16       278       262   149.692        96    0.120742    0.397625
>      8      16       317       301   150.478       156    0.786822    0.411193
>      9      16       360       344   152.867       172    0.601956    0.411577
>     10      16       403       387   154.778       172     0.20342    0.404114
> Total time run:         10.353683
> Total writes made:      404
> Write size:             4194304
> Object size:            4194304
> Bandwidth (MB/sec):     156.08
> Stddev Bandwidth:       29.5778
> Max bandwidth (MB/sec): 192
> Min bandwidth (MB/sec): 96
> Average IOPS:           39
> Stddev IOPS:            7
> Max IOPS:               48
> Min IOPS:               24
> Average Latency(s):     0.409676
> Stddev Latency(s):      0.243565
> Max latency(s):         1.25028
> Min latency(s):         0.0830112
> Cleaning up (deleting benchmark objects)
> Removed 404 objects
> Clean up completed and total clean up time: 0.867185
>
>
>
>



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


