ceph osd perf question

Hi guys,

Could someone explain what the new perf stats show, and whether the numbers are reasonable on my cluster?

I am concerned about the high fs_commit_latency, which is above 150ms for all OSDs. I've tried to find documentation on what this command actually shows, but couldn't find anything.

I am using 3TB SAS drives, with four OSD journals on each SSD. Are the numbers below reasonable for a fairly idle Ceph cluster (OSD utilisation below 10% on average)?

# ceph osd perf
osdid fs_commit_latency(ms) fs_apply_latency(ms)
    0                   192                    4
    1                   265                    4
    2                   116                    1
    3                   125                    2
    4                   166                    1
    5                   209                    3
    6                   184                    6
    7                   142                    2
    8                   209                    1
    9                   166                    1
   10                   216                    1
   11                   308                    3
   12                   150                    2
   13                   125                    1
   14                   175                    2
   15                   142                    2
   16                   150                    4


When the cluster gets a bit busy (OSD utilisation below 50% on average) I see:

# ceph osd perf
osdid fs_commit_latency(ms) fs_apply_latency(ms)
    0                   551                   11
    1                   284                   25
    2                   517                   41
    3                   492                   14
    4                   625                   13
    5                   309                   26
    6                   650                    9
    7                   517                   21
    8                   634                   25
    9                   784                   32
   10                   392                    7
   11                   501                    8
   12                   602                   12
   13                   467                   14
   14                   476                   36
   15                   451                   11
   16                   383                   21

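For what it's worth, here is the small script I've been using to flag slow OSDs from `ceph osd perf -f json` instead of eyeballing the table. The `osd_perf_infos` schema below is an assumption based on what my cluster emits and may differ between releases:

```python
# Flag OSDs whose fs_commit_latency exceeds a threshold, parsed from
# `ceph osd perf -f json` output. NOTE: the "osd_perf_infos" /
# "perf_stats" key names are an assumption from my cluster's output
# and may vary across Ceph releases.
import json

# Inline sample standing in for `ceph osd perf -f json` output.
SAMPLE = """
{"osd_perf_infos": [
  {"id": 0, "perf_stats": {"commit_latency_ms": 192, "apply_latency_ms": 4}},
  {"id": 1, "perf_stats": {"commit_latency_ms": 265, "apply_latency_ms": 4}},
  {"id": 2, "perf_stats": {"commit_latency_ms": 116, "apply_latency_ms": 1}}
]}
"""

def slow_osds(perf_json, commit_ms_threshold=150):
    """Return sorted [(osd_id, commit_ms)] for OSDs above the threshold."""
    data = json.loads(perf_json)
    return sorted(
        (osd["id"], osd["perf_stats"]["commit_latency_ms"])
        for osd in data["osd_perf_infos"]
        if osd["perf_stats"]["commit_latency_ms"] > commit_ms_threshold
    )

if __name__ == "__main__":
    for osd_id, commit_ms in slow_osds(SAMPLE):
        print(f"osd.{osd_id}: commit latency {commit_ms} ms")
```

On the live cluster I feed it the real command output rather than the inline sample, but the flagged list looks the same as the tables above: almost everything is over 150ms.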

Thanks

Andrei
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
