LTTng tracing: hitting the message throttle

Hi,

While running tests to collect data using LTTng, I was hitting the
message throttle in ceph-osd, which is controlled by the configuration
option osd_map_message_max (default 100).
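
In case it helps: raising the limit for a test run should be possible
via ceph.conf or at runtime (the value 500 below is just an arbitrary
example, and I have only briefly checked the injectargs route):

  # ceph.conf
  [osd]
      osd map message max = 500

  # or injected into running daemons
  ceph tell osd.* injectargs '--osd-map-message-max 500'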

If I am not mistaken, this throttle kicks in when more than 100
messages received via the SimpleMessenger have not yet been destructed,
i.e. are still being processed within the affected ceph-osd daemon.
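
To make sure I am reading the mechanism correctly, here is a minimal
sketch (my own simplification, not Ceph's actual throttle code) of a
counting throttle where each in-flight message holds a slot until it
is destructed:

  #include <condition_variable>
  #include <mutex>

  // Counting throttle: take() blocks once 'max' messages are in
  // flight; each message releases its slot again on destruction.
  class MessageThrottle {
  public:
      explicit MessageThrottle(int max) : max_(max), in_flight_(0) {}

      // Called when a message is received; blocks at the limit.
      void take() {
          std::unique_lock<std::mutex> l(m_);
          cv_.wait(l, [this] { return in_flight_ < max_; });
          ++in_flight_;
      }

      // Called once processing is finished and the message goes away.
      void put() {
          std::lock_guard<std::mutex> l(m_);
          --in_flight_;
          cv_.notify_one();
      }

  private:
      std::mutex m_;
      std::condition_variable cv_;
      const int max_;
      int in_flight_;
  };

  // RAII wrapper: a message occupies a slot for its whole lifetime.
  struct ThrottledMessage {
      explicit ThrottledMessage(MessageThrottle& t) : t_(t) { t_.take(); }
      ~ThrottledMessage() { t_.put(); }
      MessageThrottle& t_;
  };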

I was running a fio test: random writes of 4 KByte with 16 parallel
I/Os. The storage cluster consists of 12 OSDs on 3 storage nodes,
replication level 3.

I wonder why I hit that message throttle with my load profile:
the 16 parallel I/Os should generate at most 48 data messages plus
48 acknowledgements across the cluster, so I would not expect anything
even close to the 100-message limit on a single OSD in my cluster.
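
To spell out the back-of-the-envelope calculation (assuming the
messages distribute evenly across the OSDs, which of course they
need not):

  16 parallel I/Os x 3 replicas    = 48 write messages in flight
  one acknowledge per write        = 48 ack messages
  96 messages across 12 OSDs       = 8 messages per OSD on average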

Has anybody else experienced similar issues?

Are there other throttle values which I should look at?


Regards

Andreas Bluemle


-- 
Andreas Bluemle                     mailto:Andreas.Bluemle@xxxxxxxxxxx
ITXperts GmbH                       http://www.itxperts.de
Balanstrasse 73, Geb. 08            Phone: (+49) 89 89044917
D-81541 Muenchen (Germany)          Fax:   (+49) 89 89044910

Company details: http://www.itxperts.de/imprint.htm