Re: Jewel + kernel 4.4 Massive performance regression (-50%)

Hello,
I am running Ubuntu 16.04 with kernel 4.4.0-31-generic and my speeds are similar.

I did tests on Ubuntu 14.04 and Ubuntu 16.04 and the speed is similar: around 80-90 MB/s per OSD on both operating systems.

The only issue I am observing with Ubuntu 16.04 is that sometimes the OSDs fail to start after a reboot until I start them manually or add start commands to rc.local.
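
For reference, the kind of start commands I mean (a rough sketch; the ceph-osd@<id> systemd unit names and ceph-disk activate-all are assumptions about what the Jewel packages provide on Xenial, and the OSD IDs must be adjusted per node):

# enable the per-OSD units so they come up at boot
sudo systemctl enable ceph-osd@0 ceph-osd@1
sudo systemctl start ceph-osd@0 ceph-osd@1

# or, as a catch-all added to /etc/rc.local before "exit 0"
ceph-disk activate-all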

--

Lomayani

On Mon, Jul 25, 2016 at 6:45 PM, Yoann Moulin <yoann.moulin@xxxxxxx> wrote:
Hello,

(this is a repost, my previous message seems to have slipped under the radar)

Does anyone see a similar behaviour to the one described below?

I found a big performance drop with kernel 4.4.0-24.14 (default kernel on Ubuntu
Xenial 16.04) compared to kernel 3.13.0-88 (default kernel on Ubuntu Trusty
14.04) and kernel 4.2.0.

- Ceph version is Jewel (10.2.2).
- All tests have been done under Ubuntu 14.04.
- Each cluster has 5 nodes strictly identical.
- Each node has 10 OSDs.
- Journals are on the disk.

Kernel 4.4 drops by more than 50% compared to 4.2.
Kernel 4.4 drops by 40% compared to 3.13.

Details below:

With all 3 kernels I get the same performance on the disks:

Raw benchmark:
dd if=/dev/zero of=/dev/sdX bs=1M count=1024 oflag=direct    => average ~230MB/s
dd if=/dev/zero of=/dev/sdX bs=1G count=1 oflag=direct       => average ~220MB/s

Filesystem mounted benchmark:
dd if=/dev/zero of=/sdX1/test.img bs=1G count=1              => average ~205MB/s
dd if=/dev/zero of=/sdX1/test.img bs=1G count=1 oflag=direct => average ~214MB/s
dd if=/dev/zero of=/sdX1/test.img bs=1G count=1 oflag=sync   => average ~190MB/s

Ceph OSD benchmark:
Kernel 3.13.0-88-generic : ceph tell osd.ID bench => average  ~81MB/s
Kernel 4.2.0-38-generic  : ceph tell osd.ID bench => average ~109MB/s
Kernel 4.4.0-24-generic  : ceph tell osd.ID bench => average  ~50MB/s
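
For reference, the per-OSD numbers can be collected in one loop along these lines (a sketch; the -f json flag, the bytes_per_sec field name and the jq dependency are assumptions):

for id in $(ceph osd ls); do
    ceph tell osd.$id bench -f json |
        jq -r '.bytes_per_sec' |
        awk -v id="$id" '{ printf "osd.%s: %.1f MB/s\n", id, $1 / 1048576 }'
done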

I then ran new benchmarks on 3 fresh clusters.

- Each cluster has 3 nodes strictly identical.
- Each node has 10 OSDs.
- Journals are on the disk.

bench5 : Ubuntu 14.04 / Ceph Infernalis
bench6 : Ubuntu 14.04 / Ceph Jewel
bench7 : Ubuntu 16.04 / Ceph Jewel

This is the average of 2 runs of "ceph tell osd.* bench" on each cluster (2 x 30 OSDs):

bench5 / 14.04 / Infernalis / kernel 3.13 :  54.35 MB/s
bench6 / 14.04 / Jewel      / kernel 3.13 :  86.47 MB/s

bench5 / 14.04 / Infernalis / kernel 4.2  :  63.38 MB/s
bench6 / 14.04 / Jewel      / kernel 4.2  : 107.75 MB/s
bench7 / 16.04 / Jewel      / kernel 4.2  : 101.54 MB/s

bench5 / 14.04 / Infernalis / kernel 4.4  :  53.61 MB/s
bench6 / 14.04 / Jewel      / kernel 4.4  :  65.82 MB/s
bench7 / 16.04 / Jewel      / kernel 4.4  :  61.57 MB/s

If needed, I have the raw output of "ceph tell osd.* bench"
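
The per-cluster averages above can be reproduced the same way by piping that loop through awk (same assumptions as the sketch above):

for id in $(ceph osd ls); do
    ceph tell osd.$id bench -f json | jq -r '.bytes_per_sec'
done | awk '{ s += $1; n++ } END { printf "average: %.2f MB/s\n", s / n / 1048576 }'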

Best regards

--
Yoann Moulin
EPFL IC-IT
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
