Re: Jewel + kernel 4.4 Massive performance regression (-50%)

Hello Mark,

> FWIW, on CentOS7 I actually saw a performance increase when upgrading from the
> stock 3.10 kernel to 4.4.5 with Intel P3700 NVMe devices.  I was encountering
> some kind of strange concurrency/locking issues at the driver level that 4.4.5
> resolved.  I think your best bet is to try different intermediate kernels, track
> it down as much as you can and then look through the kernel changelog.

The point here is that I have only installed kernels from the
linux-image-virtual-lts package; for my future environment I expect to stay on
the LTS kernel packages maintained by the security team.
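
For example (a minimal sketch, assuming the Trusty HWE enablement stack; the
exact meta package name may differ for your flavour):

# Hypothetical example: on Ubuntu 14.04 the 4.4 kernel is pulled in through the
# lts-xenial HWE meta package, which then keeps receiving the security team's updates.
sudo apt-get update
sudo apt-get install linux-image-virtual-lts-xenial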

Anyway, I'm still testing; I can try intermediate kernels to find the one where
the regression starts.
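
For example, something along these lines with the Ubuntu mainline kernel builds
(a sketch only; the .deb file names are placeholders, the real ones are listed
per version on kernel.ubuntu.com):

# Hypothetical sketch: install an intermediate mainline kernel (e.g. v4.3) to
# narrow down where the regression starts. File names below are placeholders;
# see http://kernel.ubuntu.com/~kernel-ppa/mainline/ for the actual packages.
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.3/linux-image-<version>-generic_<version>_amd64.deb
sudo dpkg -i linux-image-*.deb
sudo reboot
# After the reboot, confirm the running kernel and re-run the OSD bench:
uname -r
ceph tell osd.0 bench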

> Sorry I can't be of more help!

no problem :)

--
Yoann

> On 07/25/2016 10:45 AM, Yoann Moulin wrote:
>> Hello,
>>
>> (this is a repost; my previous message seems to have slipped under the radar)
>>
>> Has anyone seen behaviour similar to the one described below?
>>
>> I found a big performance drop when going from kernel 3.13.0-88 (the default
>> kernel on Ubuntu Trusty 14.04) or kernel 4.2.0 to kernel 4.4.0.24.14 (the
>> default kernel on Ubuntu Xenial 16.04).
>>
>> - Ceph version is Jewel (10.2.2).
>> - All tests have been done under Ubuntu 14.04.
>> - Each cluster has 5 strictly identical nodes.
>> - Each node has 10 OSDs.
>> - Journals are co-located on the OSD disks.
>>
>> Kernel 4.4 shows a drop of more than 50% compared to 4.2.
>> Kernel 4.4 shows a drop of 40% compared to 3.13.
>>
>> Details below:
>>
>> With all 3 kernels, the disks show the same performance:
>>
>> Raw benchmark:
>> dd if=/dev/zero of=/dev/sdX bs=1M count=1024 oflag=direct    => average ~230MB/s
>> dd if=/dev/zero of=/dev/sdX bs=1G count=1 oflag=direct       => average ~220MB/s
>>
>> Filesystem mounted benchmark:
>> dd if=/dev/zero of=/sdX1/test.img bs=1G count=1              => average ~205MB/s
>> dd if=/dev/zero of=/sdX1/test.img bs=1G count=1 oflag=direct => average ~214MB/s
>> dd if=/dev/zero of=/sdX1/test.img bs=1G count=1 oflag=sync   => average ~190MB/s
>>
>> Ceph osd Benchmark:
>> Kernel 3.13.0-88-generic : ceph tell osd.ID bench => average  ~81MB/s
>> Kernel 4.2.0-38-generic  : ceph tell osd.ID bench => average ~109MB/s
>> Kernel 4.4.0-24-generic  : ceph tell osd.ID bench => average  ~50MB/s
>>
>> I then ran new benchmarks on 3 fresh clusters.
>>
>> - Each cluster has 3 strictly identical nodes.
>> - Each node has 10 OSDs.
>> - Journals are on the disk.
>>
>> bench5 : Ubuntu 14.04 / Ceph Infernalis
>> bench6 : Ubuntu 14.04 / Ceph Jewel
>> bench7 : Ubuntu 16.04 / Ceph Jewel
>>
>> This is the average of 2 runs of "ceph tell osd.* bench" on each cluster
>> (2 x 30 OSDs); a sketch of how such an average can be gathered is shown below.
>>
>> bench5 / 14.04 / Infernalis / kernel 3.13 :  54.35 MB/s
>> bench6 / 14.04 / Jewel      / kernel 3.13 :  86.47 MB/s
>>
>> bench5 / 14.04 / Infernalis / kernel 4.2  :  63.38 MB/s
>> bench6 / 14.04 / Jewel      / kernel 4.2  : 107.75 MB/s
>> bench7 / 16.04 / Jewel      / kernel 4.2  : 101.54 MB/s
>>
>> bench5 / 14.04 / Infernalis / kernel 4.4  :  53.61 MB/s
>> bench6 / 14.04 / Jewel      / kernel 4.4  :  65.82 MB/s
>> bench7 / 16.04 / Jewel      / kernel 4.4  :  61.57 MB/s
>>
>> If needed, I have the raw output of "ceph tell osd.* bench"
>>
>> Best regards
>>
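
For reference, here is a minimal sketch of how a per-OSD average like the ones
quoted above can be gathered (not the exact script I used; it assumes jq is
installed to parse the JSON output):

# Hypothetical helper: run "ceph tell osd.N bench" for every OSD and print the
# average of the reported bytes_per_sec in MB/s. Assumes the default bench
# parameters (1 GB written in 4 MB blocks) and that jq is available.
for id in $(ceph osd ls); do
    ceph tell osd.$id bench -f json
done | jq -s 'map(.bytes_per_sec) | (add / length) / 1048576'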