Re: op_w_latency


Thanks for the updated command – much cleaner!

 

The OSD nodes each have a single 6-core X5650 @ 2.67GHz, 72GB of RAM, and around 8 x 10TB HDD OSDs / 4 x 2TB SSD OSDs. CPU usage is around 20% and there is 22GB of RAM available.

The 3 MON nodes are the same hardware but with no OSDs.

The cluster has around 150 drives and is only doing 500-1000 ops overall.

The network is dual 10Gbit using LACP, with a VLAN for private Ceph traffic and untagged for public traffic.

 

Glen

From: Konstantin Shalygin <k0ste@xxxxxxxx>
Sent: Wednesday, 3 April 2019 11:39 AM
To: Glen Baars <glen@xxxxxxxxxxxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] op_w_latency

 

Hello Ceph Users,
 
I am finding that the write latency across my Ceph clusters isn't great, and I wanted to see what other people are getting for op_w_latency. Generally I am getting 70-110ms latency.
 
I am using: ceph --admin-daemon /var/run/ceph/ceph-osd.102.asok perf dump | grep -A3 '\"op_w_latency' | grep 'avgtime'

Better like this:

ceph daemon osd.102 perf dump | jq '.osd.op_w_latency.avgtime'
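
To compare every OSD on a node at once, a minimal sketch along the same lines (assuming the admin sockets sit in the default /var/run/ceph/ location and jq is installed):

for sock in /var/run/ceph/ceph-osd.*.asok; do
    # print the socket path and that OSD's cumulative average write latency (seconds)
    printf '%s: ' "$sock"
    ceph --admin-daemon "$sock" perf dump | jq '.osd.op_w_latency.avgtime'
done

Keep in mind that avgtime is averaged over the daemon's whole lifetime (or since the counters were last reset), so it can hide recent spikes.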

 
RAM, CPU and network don't seem to be the bottleneck. The drives are behind a Dell H810p RAID card with a 1GB write-back cache and battery. I have tried with LSI JBOD cards and haven't found it faster (as you would expect with write cache). The disks, as seen through iostat -xyz 1, show 10-30% utilisation, with general service + write latency around 3-4ms. Queue depth is normally less than one. RocksDB write latency is around 0.6ms, read 1-2ms. Usage is RBD backend for CloudStack.
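
For what it's worth, those RocksDB figures can be read out of the same perf dump. A rough example, assuming a BlueStore OSD where the counters sit in the rocksdb section (counter names may differ between releases):

ceph daemon osd.102 perf dump | jq '.rocksdb | {get_latency, submit_latency, submit_sync_latency}'

Each of these reports avgcount, sum and avgtime, with avgtime in seconds averaged since the daemon started.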
 

What is your hardware? Your CPU, RAM, Eth?

k

