Re: apply/commit latency

First, the cluster is newly built, so it's sitting idle and doing nothing.

Second, this new cluster has exactly the same hardware as my other clusters, the same kernel, and the same journal device setup, OSD layout, etc. The only difference is the ceph version: 0.80.7 vs. hammer.

-Simon

On Wednesday, June 3, 2015, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
On Wed, Jun 3, 2015 at 5:19 AM, Xu (Simon) Chen <xchenum@xxxxxxxxx> wrote:
> Hi folks,
>
> I've always been confused about the apply/commit latency numbers in "ceph
> osd perf" output. I only know for sure that when they get too high,
> performance is bad.
>
> My deployments have seen many different versions of ceph. Pre-0.80.7, I
> saw those numbers being pretty high. After upgrading to 0.80.7, all of a
> sudden, the commit latency of all OSDs dropped to 0-1ms, and apply latency
> stayed pretty low most of the time.
>
> Now I'm trying hammer in a new cluster, and even when the cluster is doing
> nothing, I see commit latency as high as 20ms and apply latency at 200+ms,
> which seems a bit off to me.
>
> Any ideas how these numbers changed over versions?

Commit latency is how long it takes for an operation to be committed to
disk: generally speaking, how long it takes the journal to write an
entry. Apply latency is how long it takes for the operation to be applied
to the backing filesystem (which can be throttled by various things to
prevent us from accumulating arbitrarily large amounts of dirty data).
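
If you want to watch these per OSD without eyeballing the "ceph osd perf"
table by hand, something like the sketch below should work against the JSON
output. I'm going from memory on the key names (osd_perf_infos, perf_stats,
commit_latency_ms, apply_latency_ms), so treat those as an assumption and
check them against what your version actually emits:

#!/usr/bin/env python
# Sketch: pull per-OSD commit/apply latency out of "ceph osd perf -f json"
# and flag anything that looks high. The JSON key names below are an
# assumption; adjust them to match what your Ceph version prints.
import json
import subprocess

COMMIT_MS_WARN = 10    # illustrative thresholds only; what counts as
APPLY_MS_WARN = 100    # "high" depends on your hardware

raw = subprocess.check_output(["ceph", "osd", "perf", "-f", "json"])
stats = json.loads(raw.decode("utf-8"))

for osd in stats.get("osd_perf_infos", []):
    perf = osd["perf_stats"]
    commit = perf["commit_latency_ms"]
    apply_ms = perf["apply_latency_ms"]
    flag = "  <-- high" if (commit > COMMIT_MS_WARN or apply_ms > APPLY_MS_WARN) else ""
    print("osd.%s  commit=%sms  apply=%sms%s" % (osd["id"], commit, apply_ms, flag))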

The speeds we can get on these haven't changed much, although they've
generally gotten a little faster for Hammer. If you've seen wide
variances, I think it has more to do with the setup of each cluster or
the state of the system at the time you ran the test than anything
inherent. :)

(For reference, the numbers I expect to see in a lightly-loaded SSD
cluster are ~2ms commit times for writes, or just a bit less. Anything
over 10ms is definitely wrong there, although that's close to correct for
an SSD-journaled hard drive cluster, where I'd expect more like 5-7ms.)
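
If you want to rule out the journal device itself, a quick-and-dirty check
is to time small write+fsync pairs against a scratch file on the same SSD.
The sketch below does that; the path is just a placeholder, and it
deliberately writes to a throwaway file rather than the journal partition:

#!/usr/bin/env python
# Rough sync-write latency check: time 4 KB write+fsync pairs against a
# scratch file. Point TEST_FILE at a filesystem on the same device as the
# OSD journal; the path below is a placeholder, not a real journal path.
import os
import time

TEST_FILE = "/var/tmp/journal-latency-test"   # placeholder, change it
ITERATIONS = 200
BLOCK = b"\0" * 4096

fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT, 0o600)
latencies = []
try:
    for _ in range(ITERATIONS):
        start = time.time()
        os.write(fd, BLOCK)
        os.fsync(fd)
        latencies.append((time.time() - start) * 1000.0)
finally:
    os.close(fd)
    os.unlink(TEST_FILE)

latencies.sort()
print("avg %.2f ms  p99 %.2f ms  max %.2f ms" % (
    sum(latencies) / len(latencies),
    latencies[int(len(latencies) * 0.99) - 1],
    latencies[-1],
))

If the averages from that are already in the 5-10ms range on your SSDs, the
latency you're seeing probably isn't coming from the OSD code.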
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
