Re: Rados performance inconsistencies, lower than expected performance

On Thu, Sep 06, 2018 at 05:15:26PM +0200, Marc Roos wrote:
> 
> It is idle, still testing, running backups on it at night.
> How do you fill up the cluster so you can test between empty and full? 
> Do you have a "ceph df" from empty and full? 
> 
> I have done another test disabling new scrubs on the rbd.ssd pool (but 
> still 3 on hdd) with:
> ceph tell osd.* injectargs --osd_max_backfills=0
> Again getting slower towards the end.
> Bandwidth (MB/sec):     395.749
> Average Latency(s):     0.161713
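(For reference, figures like the bandwidth and latency quoted above are typically produced with `rados bench`; a minimal sketch, where the pool name is taken from the thread but the duration and thread count are assumptions:)

```shell
# Write test: reports Bandwidth (MB/sec) and Average Latency(s);
# --no-cleanup keeps the objects so a read test can follow.
rados bench -p rbd.ssd 60 write -t 16 --no-cleanup

# Sequential read pass over the objects just written.
rados bench -p rbd.ssd 60 seq -t 16

# Remove the benchmark objects afterwards.
rados -p rbd.ssd cleanup
```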
In the results you both posted, the latency is twice as high as in our
tests [1]. That alone can make quite a difference. Depending on the
actual hardware used, there may or may not be room for meaningful
optimisation.

As a starting point, you could test the disks with fio, as shown in our
benchmark paper, to get baseline numbers. The forum thread [1] also has
benchmarks from other users for comparison.
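(A rough sketch of such a single-disk fio test; the device path and runtime
are assumptions, and note that this writes directly to the device, so it
must only be run against a disk with no data on it:)

```shell
# 4k synchronous direct writes, queue depth 1 -- approximates the
# journal/DB write pattern that dominates OSD latency.
# /dev/sdX is a placeholder: replace with the disk under test (DESTRUCTIVE).
fio --ioengine=libaio --filename=/dev/sdX \
    --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --name=fio-4k-sync-write
```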

[1] https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2018-02.41761/

--
Cheers,
Alwin

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


