Re: kernel 3.18 io bottlenecks?


 



Dear Ilya,

On 25.06.2015 at 14:07, Ilya Dryomov wrote:
On Wed, Jun 24, 2015 at 10:29 PM, Stefan Priebe <s.priebe@xxxxxxxxxxxx> wrote:

On 24.06.2015 at 19:53, Ilya Dryomov wrote:

On Wed, Jun 24, 2015 at 8:38 PM, Stefan Priebe <s.priebe@xxxxxxxxxxxx>
wrote:


On 24.06.2015 at 16:55, Nick Fisk wrote:


That kernel probably has the bug where tcp_nodelay is not enabled. That
is fixed in kernel 4.0+; however, 4.0 also introduced blk-mq, which
brings two other limitations:
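For context on the tcp_nodelay point: TCP_NODELAY disables Nagle's algorithm, so small messages go out immediately instead of being coalesced, which matters for latency-sensitive traffic like Ceph's. The kernel client sets the equivalent option internally; the snippet below is only an illustration of the socket knob from userspace Python, not the kernel client's actual code path:

```python
import socket

# Illustration only: TCP_NODELAY turns off Nagle's algorithm so small
# writes are sent immediately instead of being batched. Without it,
# request/reply traffic can stall waiting for ACKs, which is the kind
# of latency bug described above.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Nonzero means Nagle is disabled on this socket.
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0)
s.close()
```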



blk-mq is terribly slow. That's correct.


Is that a general sentiment or your experience with rbd?  If the
latter, can you describe your workload and provide some before and
after blk-mq numbers?  We'd be very interested in identifying and
fixing any performance regressions you might have on blk-mq rbd.


Oh, I'm sorry. I accidentally compiled blk-mq into the kernel when 3.18.1 came
out and was wondering why the I/O waits on my Ceph OSDs were doubled or
even tripled. After reverting back to cfq everything was fine again. I
didn't dig deeper into it, as I thought blk-mq was experimental in 3.18.
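A note for anyone checking the same thing on their OSD nodes: the active I/O scheduler for a block device is the bracketed entry in /sys/block/<dev>/queue/scheduler (e.g. "noop deadline [cfq]"). The sysfs path is standard; the parsing helper below is just my own small sketch:

```python
# Sketch: the active scheduler is shown in brackets in the sysfs file,
# e.g. "noop deadline [cfq]" means cfq is in use. On a device driven by
# blk-mq this file looks different (e.g. "none" on older kernels).
def active_scheduler(sysfs_line: str) -> str:
    """Return the scheduler name enclosed in brackets."""
    start = sysfs_line.index("[") + 1
    end = sysfs_line.index("]")
    return sysfs_line[start:end]

def read_scheduler(dev: str) -> str:
    """Read and parse /sys/block/<dev>/queue/scheduler (Linux only)."""
    with open(f"/sys/block/{dev}/queue/scheduler") as f:
        return active_scheduler(f.read())

print(active_scheduler("noop deadline [cfq]"))  # cfq
```

Switching schedulers is then just writing the desired name back to the same sysfs file as root.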

That doesn't make sense - rbd was switched to blk-mq in 4.0.  Or did
you try to apply the patch from the mailing list to 3.18?

I'm talking about the ceph-osd process / server side, not about the rbd client side.

If you're willing to assist, I can give it a try - but I need the patches you
mention first (git commit IDs?).

No commit IDs, as the patches are not upstream yet.  I have everything
gathered in the testing+blk-mq-plug branch of ceph-client.git:

https://github.com/ceph/ceph-client/tree/testing%2Bblk-mq-plug

A deb (ubuntu, debian, etc):

http://gitbuilder.ceph.com/kernel-deb-precise-x86_64-basic/ref/testing_blk-mq-plug/linux-image.deb

An rpm (fedora, centos, rhel):

http://gitbuilder.ceph.com/kernel-rpm-centos7-x86_64-basic/ref/testing_blk-mq-plug/kernel.x86_64.rpm

These are built with slightly stripped-down distro configs, so they should
boot on most boxes.

Thanks,

                 Ilya

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


