Re: Cephfs slow 6MB/s and rados bench sort of ok.

Thanks!!!

https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg46212.html
echo 8192 >/sys/devices/virtual/bdi/ceph-1/read_ahead_kb
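(Note: this sysfs setting does not survive a reboot or remount. A sketch of
checking the current value, and alternatively setting the same limit at mount
time via the kernel client's rasize option; the monitor address and mount
point below are placeholders:)

# check the current read-ahead for the cephfs bdi
cat /sys/devices/virtual/bdi/ceph-1/read_ahead_kb
# or set it at mount time; rasize is in bytes (8192 KB = 8388608)
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,rasize=8388608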



-----Original Message-----
From: Yan, Zheng [mailto:ukernel@xxxxxxxxx] 
Sent: Tuesday, August 28, 2018 15:44
To: Marc Roos
Cc: ceph-users
Subject: Re:  Cephfs slow 6MB/s and rados bench sort of ok.

It's a bug. Search for the thread "Poor CentOS 7.5 client performance" in
ceph-users.
On Tue, Aug 28, 2018 at 2:50 AM Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> 
wrote:
>
>
> I have an idle test cluster (CentOS 7.5, Linux c04
> 3.10.0-862.9.1.el7.x86_64) and a client kernel mount of CephFS.
>
> I tested reading a few files on this CephFS mount and got very low
> throughput compared to the rados bench. What could be the issue here?
>
> [@client folder]# dd if=5GB.img of=/dev/null status=progress
> 954585600 bytes (955 MB) copied, 157.455633 s, 6.1 MB/s
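>
> (Side note: dd's default block size is 512 bytes; rerunning with a larger
> block size, as sketched below, helps rule out per-request overhead on the
> client. The 4M value is just an example.)
> dd if=5GB.img of=/dev/null bs=4M status=progress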
>
>
>
> I included this rados bench output, which shows that cluster
> performance is roughly as expected.
> [@c01 ~]# rados bench -p fs_data 10 write
> hints = 1
> Maintaining 16 concurrent writes of 4194304 bytes to objects of size
> 4194304 for up to 10 seconds or 0 objects
> Object prefix: benchmark_data_c01_453883
>   sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
>     0       0         0         0         0         0           -           0
>     1      16        58        42   167.967       168    0.252071    0.323443
>     2      16       106        90   179.967       192    0.583383    0.324867
>     3      16       139       123   163.973       132    0.170865    0.325976
>     4      16       183       167   166.975       176    0.413676    0.361364
>     5      16       224       208   166.374       164    0.394369    0.365956
>     6      16       254       238   158.642       120    0.698396    0.382729
>     7      16       278       262   149.692        96    0.120742    0.397625
>     8      16       317       301   150.478       156    0.786822    0.411193
>     9      16       360       344   152.867       172    0.601956    0.411577
>    10      16       403       387   154.778       172     0.20342    0.404114
> Total time run:         10.353683
> Total writes made:      404
> Write size:             4194304
> Object size:            4194304
> Bandwidth (MB/sec):     156.08
> Stddev Bandwidth:       29.5778
> Max bandwidth (MB/sec): 192
> Min bandwidth (MB/sec): 96
> Average IOPS:           39
> Stddev IOPS:            7
> Max IOPS:               48
> Min IOPS:               24
> Average Latency(s):     0.409676
> Stddev Latency(s):      0.243565
> Max latency(s):         1.25028
> Min latency(s):         0.0830112
> Cleaning up (deleting benchmark objects)
> Removed 404 objects
> Clean up completed and total clean up time :0.867185
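>
> (The bench above measures writes only; a sequential read bench would be
> the closer analogue to the dd test. A sketch against the same pool; seq
> needs objects left behind by a prior write run with --no-cleanup:)
> rados bench -p fs_data 10 write --no-cleanup
> rados bench -p fs_data 10 seq
> rados -p fs_data cleanup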
>
>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


