On Fri, Feb 3, 2017 at 10:07 AM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
> On Fri, 3 Feb 2017, sheng qiu wrote:
>> ---------- Forwarded message ----------
>> From: sheng qiu <herbert1984106@xxxxxxxxx>
>> Date: Fri, Feb 3, 2017 at 7:45 AM
>> Subject: Re: question about snapset
>> To: Sage Weil <sage@xxxxxxxxxxxx>
>> Cc: ceph-devel <ceph-devel@xxxxxxxxxxxxxxx>
>>
>> Hi Sage,
>>
>> Thanks a lot for your reply. It's very helpful.
>> We are trying to avoid the query for the snapset object on a new-object
>> write; we think it may save some latency. Currently, when we measure
>> the op prepare latency, it is around 300 us, which is quite high. Is
>> this normal? In our test we configure 64 PGs per OSD and use five
>> shards with two workers per shard. The test machine is a four-socket
>> box with 40 cores and plenty of memory.
>> We are thinking about how to reduce this latency; do you have any suggestions?
>
> There are several sources, and this is an active area of
> investigation and optimization. I'm not sure that the snapset
> specifically is a big part of the problem; it's more likely the overall
> work involved in get_object_context(), which fetches the
> attributes from the object. The snapset will be a small part of this.
>
> I suggest joining the weekly performance call if you can. Or we can
> discuss some of the specific efforts on the list. The main efforts here
> are
>
> - simplifying ms_fast_dispatch so that incoming messages get queued more
>   quickly

Please look at the call stack below. Could you share more detail about which parts could be simplified?

(gdb) bt
#0  PG::queue_op ()
#1  OSD::enqueue_op ()
#2  OSD::handle_op ()
#3  OSD::dispatch_op_fast ()
#4  OSD::dispatch_session_waiting ()
#5  OSD::ms_fast_dispatch ()
#6  Messenger::ms_fast_dispatch ()
#7  DispatchQueue::fast_dispatch ()
#8  AsyncConnection::process ()
#9  EventCenter::process_events ()
#10 NetworkStack::<lambda()>::operator()
#12 start_thread ()
#13 clone ()
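For what it's worth, below is a minimal, self-contained sketch of the general pattern behind the dispatch path in the call stack above: the network thread does only a cheap shard selection and a short lock-protected enqueue, and all expensive per-op work is deferred to the shard's worker threads, so the network thread can get back to EventCenter::process_events()-style polling as soon as possible. This is not Ceph's actual code; all names here (Op, ShardedOpQueue, and so on) are hypothetical, and the 5x2 shard/worker configuration just mirrors the test setup mentioned above.

// Hypothetical sketch of a "thin" fast-dispatch path. NOT Ceph code.
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <deque>
#include <functional>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

struct Op {
  uint64_t pg_id;       // which placement group the op targets
  std::string payload;  // still-undecoded message payload
};

class ShardedOpQueue {
 public:
  ShardedOpQueue(size_t num_shards, size_t workers_per_shard,
                 std::function<void(const Op&)> handler)
      : shards_(num_shards), handler_(std::move(handler)) {
    for (size_t s = 0; s < num_shards; ++s)
      for (size_t w = 0; w < workers_per_shard; ++w)
        workers_.emplace_back([this, s] { worker_loop(shards_[s]); });
  }

  ~ShardedOpQueue() {
    for (auto& sh : shards_) {
      std::lock_guard<std::mutex> l(sh.lock);
      sh.stopping = true;
      sh.cond.notify_all();
    }
    for (auto& t : workers_) t.join();
  }

  // The "fast dispatch" entry point: deliberately nothing but a hash,
  // a short critical section, and a notify.
  void fast_dispatch(Op op) {
    Shard& sh = shards_[op.pg_id % shards_.size()];
    {
      std::lock_guard<std::mutex> l(sh.lock);
      sh.queue.push_back(std::move(op));
    }
    sh.cond.notify_one();
  }

 private:
  struct Shard {
    std::mutex lock;
    std::condition_variable cond;
    std::deque<Op> queue;
    bool stopping = false;
  };

  void worker_loop(Shard& sh) {
    for (;;) {
      Op op;
      {
        std::unique_lock<std::mutex> l(sh.lock);
        sh.cond.wait(l, [&] { return sh.stopping || !sh.queue.empty(); });
        if (sh.queue.empty()) return;  // stopping and fully drained
        op = std::move(sh.queue.front());
        sh.queue.pop_front();
      }
      handler_(op);  // the expensive work happens off the network thread
    }
  }

  std::vector<Shard> shards_;
  std::function<void(const Op&)> handler_;
  std::vector<std::thread> workers_;
};

int main() {
  // 5 shards x 2 workers, mirroring the configuration described in the thread.
  ShardedOpQueue q(5, 2, [](const Op& op) {
    std::cout << "handled op for pg " << op.pg_id << "\n";
  });
  for (uint64_t pg = 0; pg < 8; ++pg)
    q.fast_dispatch(Op{pg, "write"});
  std::this_thread::sleep_for(std::chrono::milliseconds(100));
}

The point of the sketch is simply that whatever runs between AsyncConnection::process() and the push onto the shard queue is on the latency-critical path for every incoming message, so any decoding or lookup that can be moved into the worker side shortens it.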
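On the earlier point about get_object_context(): a rough illustration of why the attribute fetch, rather than the snapset specifically, tends to dominate is sketched below. This is not Ceph code; ObjectContext, AttrStore, and ObcCache are made-up names. The idea is that every op needs the object's metadata, and a context cache pays the backend attribute lookup once per object (including a cached "does not exist" answer, which is what a new-object write would hit), after which the snapset is just one cached attribute among several.

// Hypothetical sketch of an object-context cache in front of a slow
// attribute store. NOT Ceph code. Requires C++17 (std::optional).
#include <iostream>
#include <map>
#include <memory>
#include <optional>
#include <string>

struct ObjectContext {
  std::map<std::string, std::string> attrs;  // e.g. object info, snapset, ...
  bool exists = false;
};

struct AttrStore {  // stands in for the slow backend lookup
  std::optional<std::map<std::string, std::string>>
  getattrs(const std::string& oid) const {
    auto it = on_disk.find(oid);
    if (it == on_disk.end()) return std::nullopt;  // object does not exist
    return it->second;
  }
  std::map<std::string, std::map<std::string, std::string>> on_disk;
};

class ObcCache {
 public:
  explicit ObcCache(const AttrStore& store) : store_(store) {}

  // Return the cached context if present; otherwise pay the full
  // attribute-fetch cost once and cache the result, including the
  // negative ("does not exist") result.
  std::shared_ptr<ObjectContext> get_object_context(const std::string& oid) {
    auto it = cache_.find(oid);
    if (it != cache_.end()) return it->second;     // cheap path

    auto obc = std::make_shared<ObjectContext>();  // expensive path
    if (auto attrs = store_.getattrs(oid)) {
      obc->attrs = *attrs;
      obc->exists = true;
    }
    cache_[oid] = obc;
    return obc;
  }

 private:
  const AttrStore& store_;
  std::map<std::string, std::shared_ptr<ObjectContext>> cache_;
};

int main() {
  AttrStore store;
  store.on_disk["existing_object"] = {{"info", "..."}, {"snapset", "..."}};
  ObcCache cache(store);
  std::cout << cache.get_object_context("existing_object")->exists << "\n";  // 1, fetched then cached
  std::cout << cache.get_object_context("new_object")->exists << "\n";       // 0, miss is cached too
}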