On Sat, Jun 27, 2015 at 6:20 PM, Stefan Priebe <s.priebe@xxxxxxxxxxxx> wrote:
> Dear Ilya,
>
> On 25.06.2015 at 14:07, Ilya Dryomov wrote:
>>
>> On Wed, Jun 24, 2015 at 10:29 PM, Stefan Priebe <s.priebe@xxxxxxxxxxxx>
>> wrote:
>>>
>>> On 24.06.2015 at 19:53, Ilya Dryomov wrote:
>>>>
>>>> On Wed, Jun 24, 2015 at 8:38 PM, Stefan Priebe <s.priebe@xxxxxxxxxxxx>
>>>> wrote:
>>>>>
>>>>> On 24.06.2015 at 16:55, Nick Fisk wrote:
>>>>>>
>>>>>> That kernel probably has the bug where tcp_nodelay is not enabled.
>>>>>> That is fixed in kernel 4.0+; however, 4.0 also introduced blk-mq,
>>>>>> which brings two other limitations:
>>>>>
>>>>> blk-mq is terribly slow. That's correct.
>>>>
>>>> Is that a general sentiment or your experience with rbd? If the
>>>> latter, can you describe your workload and provide some before and
>>>> after blk-mq numbers? We'd be very interested in identifying and
>>>> fixing any performance regressions you might have on blk-mq rbd.
>>>
>>> Oh, I'm sorry. I accidentally compiled blk-mq into the kernel when
>>> 3.18.1 came out and was wondering why the I/O waits on my ceph OSDs
>>> were doubled or even tripled. After reverting back to cfq, everything
>>> was fine again. I didn't dig deeper into it, as I thought blk-mq was
>>> experimental in 3.18.
>>
>> That doesn't make sense - rbd was switched to blk-mq in 4.0. Or did
>> you try to apply the patch from the mailing list to 3.18?
>
> I'm talking about the ceph-osd process / side, not about the rbd client side.

Ah, sorry - Nick was clearly talking about the kernel client and I
replied to his mail.

The kernel you run your OSDs on shouldn't matter much, as long as it's
not something ancient (except when you need to work around a particular
filesystem bug), so I just assumed you and German were talking about
the kernel client.

Thanks,

                Ilya
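
For readers who want to check which path their OSD data disks are on, here is a minimal sketch (plain sysfs inspection, nothing Ceph-specific; the device names in the comments are only examples) that prints the active I/O scheduler for each block device:

#!/usr/bin/env python
# Minimal sketch: report the active I/O scheduler for every block device
# by reading the standard sysfs file /sys/block/<dev>/queue/scheduler.
import glob

for path in sorted(glob.glob("/sys/block/*/queue/scheduler")):
    dev = path.split("/")[3]              # e.g. "sda" (example device name)
    with open(path) as f:
        schedulers = f.read().strip()     # e.g. "noop deadline [cfq]"
    # The active scheduler is shown in brackets; blk-mq devices on 4.x
    # kernels may just report "none" with no brackets at all.
    if "[" in schedulers:
        active = schedulers.split("[")[1].split("]")[0]
    else:
        active = schedulers
    print("%s: active=%s (available: %s)" % (dev, active, schedulers))
    # On a legacy (non-blk-mq) device, root can switch back to cfq by
    # writing to the same file, e.g. "echo cfq > /sys/block/sda/queue/scheduler".
    # A blk-mq device does not offer the legacy schedulers, so that write
    # would fail; reverting to a non-blk-mq configuration, as Stefan
    # describes, requires a kernel/driver change rather than a sysfs write.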