Re: messenger performance regression

On 24/10/2018 21:54, Gregory Farnum wrote:
> Do we understand why debug mode got so much slower? Is there something
> we can do to improve it?

I believe the slowdown is due to the increase in the number of
functions used in the new implementation. In the previous
implementation the state machine consisted of just two big functions
(each with a switch/case block), whereas the new implementation uses
one function per protocol state.
I'm not familiar with exactly what the compiler generates in Debug
mode, but I imagine there are now many more debug symbols to track,
and fewer optimizations the compiler can perform without confusing the
debugging tools. In particular, with optimizations disabled the small
per-state functions are not inlined, so every state transition pays
the full cost of a function call.
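
To illustrate what I mean, here is a minimal sketch of the two
structures (hypothetical code, not the actual messenger
implementation):

    // Minimal sketch with made-up names -- not the real messenger code.
    #include <cstdio>

    enum class State { Banner, Auth, Ready };

    // Old structure: one big function with a switch over the state.
    // Even at -O0 a transition is just a jump inside one call frame.
    State step_switch(State s) {
        switch (s) {
        case State::Banner: return State::Auth;   // exchange banners
        case State::Auth:   return State::Ready;  // authenticate
        default:            return s;
        }
    }

    // New structure: one function per protocol state. With
    // optimizations disabled nothing gets inlined, so every
    // transition pays the overhead of a real function call.
    static State handle_banner() { return State::Auth; }
    static State handle_auth()   { return State::Ready; }

    State step_per_state(State s) {
        switch (s) {
        case State::Banner: return handle_banner();
        case State::Auth:   return handle_auth();
        default:            return s;
        }
    }

    int main() {
        State s = State::Banner;
        while (s != State::Ready) s = step_per_state(s);
        std::puts("connection ready");
        return 0;
    }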

I currently don't see a way to improve the performance in Debug mode.
One thing we can do, though, is also check the performance when
compiling in RelWithDebInfo mode. If it performs similarly to Release
mode, at least we would still have debug symbols to help identify
problems.
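
For reference, assuming a standard CMake setup (and that ceph-mds is
the target of interest), that mode is selected with:

    # -O2 plus -g: optimized code that still carries debug symbols
    cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
    make ceph-mds

With GCC, RelWithDebInfo defaults to -O2 -g -DNDEBUG, so assertions
are compiled out but stack traces and symbol names remain usable.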

> 
> We are for instance seeing new issues with the messenger in our
> testing, apparently because the reduced speed opens up race conditions
> much wider. In this case that's good for us, but it could easily go
> the other way as well and I'm concerned about not finding new issues
> in our testing if the difference is so substantial compared to what
> will be deployed by users.

Maybe we could build packages from binaries compiled in both modes
(Debug and Release) and allow each test run to specify which one to
use.

> -Greg
> On Wed, Oct 24, 2018 at 3:18 AM Yan, Zheng <ukernel@xxxxxxxxx> wrote:
>>
>> Only ceph compiled in debug mode has the regression. Ceph compiled in
>> release mode has no regression. Sorry for the noise.
>>
>> Yan, Zheng
>>
>>
>>
>> On Wed, Oct 24, 2018 at 1:46 PM Yan, Zheng <ukernel@xxxxxxxxx> wrote:
>>>
>>> Hi,
>>>
>>> Yesterday I checked how fast ceph-mds can process requests (a client
>>> keeps sending getattr requests for the root inode). The request rate I
>>> got is only about half of what the same test achieved a few weeks ago.
>>> The perf profile of ceph-mds shows that messenger functions used more
>>> CPU time compared to the mimic code. Performance results and perf
>>> profiles are at http://tracker.ceph.com/issues/36561.
>>>
>>> Regards
>>> Yan, Zheng
> 

-- 
Ricardo Dias
Senior Software Engineer - Storage Team
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284
(AG Nürnberg)
