Hello,

On Fri, 21 Apr 2017 11:41:01 +0000 Tobias Kropf - inett GmbH wrote:

> Hi all,
>
> We have a running Ceph cluster with 5 OSD nodes. Performance and
> latency are good. Now we have two new Supermicro OSD nodes with HBAs.
> OSDs 0-26 are in the old servers and OSDs 27-55 in the new ones. Is
> this latency normal? OSDs 27-55 are not yet in a CRUSH bucket and are
> not mapped to any pools.
>

Firstly, you're giving us basically nothing to work with when it comes
to details, some of which might allow us to make educated guesses: all
the relevant versions (OS, Ceph, firmware/drivers), a HW description
(you mention an HBA, so are the previous nodes not Supermicro, or do
they have different controllers?), and which disks/SSDs we are talking
about. The first sketch in the P.S. below shows one way to collect most
of that.

That being said, make a test pool on these new nodes and get it busy
with fio or the likes (second sketch in the P.S.). I've seen this
before on a cluster here, where idle OSDs (in our case HDD ones behind
a cache tier) show silly latency numbers that do not reflect reality,
i.e. what they do when they are actually being used.

Christian

> osd  fs_commit_latency(ms)  fs_apply_latency(ms)
>   0                      0                     0
>   1                      0                     0
>   2                      0                     0
>   3                      0                     0
>   4                      0                     0
>   5                      0                     0
>   6                      0                     0
>   7                      0                     0
>   8                      0                     0
>   9                      0                     0
>  10                      0                     0
>  11                      0                     0
>  12                      0                     0
>  13                      0                     0
>  14                      0                     0
>  15                      0                     0
>  16                      0                     0
>  17                      0                     0
>  18                      0                     0
>  19                      0                     0
>  20                      0                     0
>  21                      0                     1
>  22                      0                     0
>  23                      0                     1
>  24                      0                     0
>  25                      0                     0
>  26                      0                     0
>  27                     32                    39
>  28                      0                     8
>  29                     42                    49
>  30                      0                    12
>  31                     52                    60
>  32                      0                    15
>  33                      0                     0
>  34                     18                   108
>  35                     10                    13
>  36                     19                   101
>  37                     14                    99
>  38                     17                   126
>  39                     14                    16
>  40                     19                    64
>  41                     12                    24
>  42                     28                   121
>  43                     16                    25
>  44                     18                   221
>  45                     11                    21
>  46                     35                   134
>  47                      7                    12
>  48                     42                   138
>  49                     11                    17
>  50                     40                   131
>  51                     15                    22
>  52                     39                   137
>  53                     14                    23
>  54                     36                   231
>  55                     12                    16
>
> Tobias

--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
http://www.gol.com/
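
P.S.: Something along these lines would collect most of the details
asked for above (the OSD id and device name are examples, adjust to
your environment):

  ceph -v                          # Ceph release on the admin node
  ceph tell osd.27 version         # confirm the new OSDs run the same build
  ceph osd tree                    # where osd.27-55 currently sit in CRUSH
  lspci | grep -i -e sas -e raid   # which HBA/controller each node has
  smartctl -i /dev/sda             # model/firmware of a disk behind an OSD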
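
And a rough sketch for the test pool suggested above, assuming the two
new hosts are named node6 and node7 (names, PG count and pool size are
assumptions, adjust to your setup; untested against your cluster):

  # Put the new hosts under their own CRUSH root so a pool can target
  # only osd.27-55:
  ceph osd crush add-bucket newroot root
  ceph osd crush move node6 root=newroot
  ceph osd crush move node7 root=newroot

  # A rule that only uses that root, and a pool on top of it; with only
  # two hosts and a host failure domain, size 2 is the maximum:
  ceph osd crush rule create-simple newrule newroot host
  ceph osd pool create testpool 256 256 replicated newrule
  ceph osd pool set testpool size 2

  # Generate load, then look at the latencies again while the OSDs are
  # actually busy:
  rados -p testpool bench 60 write -t 32 --no-cleanup
  ceph osd perf

  # Clean up afterwards:
  ceph osd pool delete testpool testpool --yes-i-really-really-mean-it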