Hi all,

Recently I did some basic work on a new messenger implementation based on events (https://github.com/yuyuyu101/ceph/tree/msg-event). The basic idea is to run one Processor thread per Messenger that monitors all sockets and dispatches ready fds to a thread pool; the event mechanism can be epoll, kqueue, poll, or select. A thread in the pool then reads/writes on that socket and dispatches the message later.

The branch has now passed basic tests. Before making it more stable and running it through more QA suites, I want to do some benchmark tests against the pipe implementation on a large-scale cluster: I would like to use at least 100 OSDs (SSD) and hundreds of clients. Benchmarking a single OSD so far, a client gets the same latency as with the pipe implementation, and the latency stdev is smaller.

The background for this work is that the pipe implementation has too much overhead from context switches and thread resources. In our environment, several ceph-osd daemons run on compute nodes that also run KVM processes.

Do you have any ideas about this, or any serious concerns compared to pipe? (A rough sketch of the processor/worker hand-off is appended below my signature.)

--
Best Regards,
Wheat
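
P.S. For reference, here is a minimal, simplified sketch of what I mean by the event-based design (Linux/epoll only, plain C++). This is not the actual code from the branch; handle_read() and the buffer handling are placeholders, and a real implementation would abstract over epoll/kqueue/poll/select.

// Sketch: one Processor thread epoll_wait()s on all sockets and hands
// ready fds to a small thread pool, which does the read and would later
// decode and dispatch the Message.
#include <sys/epoll.h>
#include <unistd.h>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class Processor {
  int epfd;
  std::queue<int> ready_fds;
  std::mutex lock;
  std::condition_variable cond;
  std::vector<std::thread> workers;
  bool stop = false;

 public:
  explicit Processor(int nworkers) : epfd(epoll_create1(0)) {
    for (int i = 0; i < nworkers; ++i)
      workers.emplace_back([this] { worker_entry(); });
  }

  // Register a connected socket; EPOLLONESHOT so only one worker
  // owns the fd at a time.
  void add_socket(int fd) {
    struct epoll_event ev{};
    ev.events = EPOLLIN | EPOLLONESHOT;
    ev.data.fd = fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
  }

  // The single Processor loop: wait for events, queue ready fds.
  void run() {
    struct epoll_event events[128];
    while (!stop) {
      int n = epoll_wait(epfd, events, 128, 100 /* ms */);
      std::lock_guard<std::mutex> l(lock);
      for (int i = 0; i < n; ++i)
        ready_fds.push(events[i].data.fd);
      cond.notify_all();
    }
  }

 private:
  void worker_entry() {
    while (true) {
      int fd;
      {
        std::unique_lock<std::mutex> l(lock);
        cond.wait(l, [this] { return stop || !ready_fds.empty(); });
        if (stop && ready_fds.empty())
          return;
        fd = ready_fds.front();
        ready_fds.pop();
      }
      handle_read(fd);   // placeholder: read bytes, decode, dispatch Message
      rearm(fd);         // re-enable EPOLLIN after the one-shot fired
    }
  }

  void handle_read(int fd) {
    char buf[4096];
    ssize_t r = read(fd, buf, sizeof(buf));
    if (r > 0)
      printf("fd %d: read %zd bytes\n", fd, r);
  }

  void rearm(int fd) {
    struct epoll_event ev{};
    ev.events = EPOLLIN | EPOLLONESHOT;
    ev.data.fd = fd;
    epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
  }
};

The EPOLLONESHOT + rearm pattern is just one way to avoid two workers reading the same socket concurrently; the branch may solve this differently.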