On Sat, Nov 9, 2013 at 10:13 AM, Samuel Just <sam.just@xxxxxxxxxxx> wrote:
> Currently, the messenger delivers messages to the Dispatcher
> implementation from a single thread (see src/msg/DispatchQueue.h/cc).
> My takeaway from the performance work so far is that we probably need
> client IO related messages to bypass the DispatchQueue bottleneck by
> allowing the thread reading the message to call directly into the
> Dispatcher. wip-queueing is a very preliminary branch implementing
> this behavior for OSD ops and subops (note, this branch does not work
> yet!). The main change is to add to the Dispatcher interface
> ms_can_fast_dispatch and ms_fast_dispatch. This allows the dispatcher
> implementation to designate some messages as safe to dispatch in
> parallel without queueing.

With this, the Messenger checks whether it can dispatch a message earlier than
normal (in the SimpleMessenger, within the Pipe threads) and does so if it's
allowed. (I suddenly realize that we probably need to make that check required,
not something the Messenger can choose to do or ignore, which is kind of a
bummer.)

There are two concerns with this:

1) If the Dispatcher lets through some messages but not others, the normal
ordering constraints will be violated. It's the Dispatcher's responsibility to
make sure that's not a problem (easy enough for the OSD; it can just sort by
message type).

2) (I didn't think about this one on Thursday, sorry Sam.) If the fast dispatch
path takes longer than reading a message does, the Pipe might get backed up a
little more than it would if it were just placing the message in the
DispatchQueue. I don't think this should be a big problem since the OS is
handling the sockets for us, but it might become a concern when we switch to
many sockets per thread. The OSD should be okay, though, since the messages are
just getting placed into PG queues.

-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
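
To make the proposed hooks concrete, here is a minimal sketch of what an
OSD-like Dispatcher could look like, sorting by message type as described in
concern (1). This is illustrative only, not code from the wip-queueing branch:
the exact virtual signatures are assumptions, and enqueue_op_for_pg() and
handle_other() are hypothetical placeholders for the OSD's real handling code.

  #include "msg/Dispatcher.h"
  #include "msg/Message.h"

  class OSDishDispatcher : public Dispatcher {
  public:
    explicit OSDishDispatcher(CephContext *cct) : Dispatcher(cct) {}

    // Called by the reader thread (e.g. a SimpleMessenger Pipe) before it
    // would normally hand the message to the DispatchQueue thread.
    // (Assumed signature; the wip branch may differ.)
    bool ms_can_fast_dispatch(Message *m) {
      switch (m->get_type()) {
      case CEPH_MSG_OSD_OP:   // client IO
      case MSG_OSD_SUBOP:     // replication subops
        return true;          // safe to dispatch in parallel, skip the queue
      default:
        return false;         // everything else keeps the ordered slow path
      }
    }

    // Runs in the reader thread itself, so it should only do cheap work
    // (e.g. drop the op into a PG queue) and never block.
    void ms_fast_dispatch(Message *m) {
      enqueue_op_for_pg(m);   // hypothetical helper
    }

    // Normal, single-threaded path for everything else.
    bool ms_dispatch(Message *m) {
      handle_other(m);        // hypothetical helper
      return true;
    }

  private:
    void enqueue_op_for_pg(Message *m);
    void handle_other(Message *m);
  };

The key property is that anything returning true from ms_can_fast_dispatch()
must be safe to reorder relative to the messages that still go through the
queue, which is exactly concern (1) above.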