Re: performance decrease after continuous run


 



I have seen this, and some of our big customers have seen it as well. I was using 8TB HDDs, and small tests against a freshly set-up cluster gave very good performance. I then loaded the Ceph cluster until each 8TB HDD held 4TB and reran the same tests: performance was cut in half.

This was with the default settings for how Ceph creates the directories and subdirectories on each OSD. You can flatten the directory structure so it is wider rather than deeper, which improves performance. Check out the filestore_merge_threshold and filestore_split_multiple settings.
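As a rough sketch, those two options go in the [osd] section of ceph.conf; the values below are illustrative only, not recommendations, so tune them for your own object sizes and workload:

```ini
[osd]
# Directories are split into subdirectories once they hold roughly
# filestore_split_multiple * abs(filestore_merge_threshold) * 16 objects,
# so raising these keeps the tree wider and shallower.
filestore_merge_threshold = 40
filestore_split_multiple = 8
```

Note that these settings only influence directories as they are created, so changing them on a cluster that already holds data will not reshape the existing structure on its own.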
Rick
On Jul 20, 2016, at 3:19 PM, Kane Kim <kane.isturm@xxxxxxxxx> wrote:

Hello,

I was running cosbench for some time and noticed sharp consistent performance decrease at some point.

Image is here: http://take.ms/rorPw
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


