Re: severe librbd performance degradation in Giant


 



On 09/18/2014 04:49 AM, Alexandre DERUMIER wrote:
According to http://tracker.ceph.com/issues/9513, do you mean that the rbd cache causes a 10x performance degradation for random reads?

Hi, on my side, I don't see any read performance degradation (sequential or random) with or without rbd_cache.

firefly: around 12,000 iops (with or without rbd_cache)
giant: around 12,000 iops (with or without rbd_cache)

(and I can reach around 20,000-30,000 iops on giant by disabling the optracker).
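
For reference, a minimal sketch of the ceph.conf change to disable the optracker (I'm assuming the osd_enable_op_tracker option here, so verify it against your release):

[osd]
# Turn off per-op tracking in the OSD; the op tracker's history and
# locking add measurable overhead on fast SSD setups.
osd enable op tracker = false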


rbd_cache only improves write performance for me (4k blocks).

I can't do it right now since I'm in the middle of reinstalling Fedora on the test nodes, but I will try to replicate this as well if we haven't figured it out beforehand.

Mark




----- Original Message -----

From: "Haomai Wang" <haomaiwang@xxxxxxxxx>
To: "Somnath Roy" <Somnath.Roy@xxxxxxxxxxx>
Cc: "Sage Weil" <sweil@xxxxxxxxxx>, "Josh Durgin" <josh.durgin@xxxxxxxxxxx>, ceph-devel@xxxxxxxxxxxxxxx
Sent: Thursday, September 18, 2014 04:27:56
Subject: Re: severe librbd performance degradation in Giant

According to http://tracker.ceph.com/issues/9513, do you mean that the rbd cache causes a 10x performance degradation for random reads?

On Thu, Sep 18, 2014 at 7:44 AM, Somnath Roy <Somnath.Roy@xxxxxxxxxxx> wrote:
Josh/Sage,
I should mention that even after turning off the rbd cache I am still seeing ~20% degradation compared to Firefly.

Thanks & Regards
Somnath

-----Original Message-----
From: Somnath Roy
Sent: Wednesday, September 17, 2014 2:44 PM
To: Sage Weil
Cc: Josh Durgin; ceph-devel@xxxxxxxxxxxxxxx
Subject: RE: severe librbd performance degradation in Giant

Created a tracker for this.

http://tracker.ceph.com/issues/9513

Thanks & Regards
Somnath

-----Original Message-----
From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-owner@xxxxxxxxxxxxxxx] On Behalf Of Somnath Roy
Sent: Wednesday, September 17, 2014 2:39 PM
To: Sage Weil
Cc: Josh Durgin; ceph-devel@xxxxxxxxxxxxxxx
Subject: RE: severe librbd performance degradation in Giant

Sage,
It's a 4K random read.

Thanks & Regards
Somnath

-----Original Message-----
From: Sage Weil [mailto:sweil@xxxxxxxxxx]
Sent: Wednesday, September 17, 2014 2:36 PM
To: Somnath Roy
Cc: Josh Durgin; ceph-devel@xxxxxxxxxxxxxxx
Subject: RE: severe librbd performance degradation in Giant

What was the I/O pattern, sequential or random? For random a slowdown makes sense (though maybe not 10x!), but not for sequential...

s

On Wed, 17 Sep 2014, Somnath Roy wrote:

I set the following in the client-side /etc/ceph/ceph.conf where I am running fio rbd.

rbd_cache_writethrough_until_flush = false

But, no difference. BTW, I am doing random reads, not writes. Does this setting still apply?

Next, I tried setting rbd_cache to false and I *got back* the old performance. Now it is similar to Firefly throughput!

So, looks like rbd_cache=true was the culprit.
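
For reference, a rough sketch of what the client-side settings tested above look like in /etc/ceph/ceph.conf (the [client] section placement is my assumption, adjust to your setup):

[client]
# Giant defaults to rbd cache = true; turning it off restored the
# Firefly-level 4K random read numbers in this test.
rbd cache = false
# Tried first on its own; it made no difference for the random read case.
rbd cache writethrough until flush = false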

Thanks Josh !

Regards
Somnath

-----Original Message-----
From: Josh Durgin [mailto:josh.durgin@xxxxxxxxxxx]
Sent: Wednesday, September 17, 2014 2:20 PM
To: Somnath Roy; ceph-devel@xxxxxxxxxxxxxxx
Subject: Re: severe librbd performance degradation in Giant

On 09/17/2014 01:55 PM, Somnath Roy wrote:
Hi Sage,
We are experiencing severe librbd performance degradation in Giant compared to the Firefly release. Here is the experiment we did to isolate it as a librbd problem.

1. Single OSD is running the latest Giant and the client is running fio rbd on top of Firefly-based librbd/librados. For one client it is giving ~11-12K iops (4K RR).
2. Single OSD is running Giant and the client is running fio rbd on top of Giant-based librbd/librados. For one client it is giving ~1.9K iops (4K RR).
3. Single OSD is running the latest Giant and the client is running Giant-based ceph_smaiobench on top of Giant librados. For one client it is giving ~11-12K iops (4K RR).
4. Giant RGW on top of a Giant OSD is also scaling.


So, it is obvious from the above that the recent librbd has issues. I will open a tracker to track this.
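
For anyone trying to reproduce the fio rbd runs above, a rough sketch of the kind of job file used for the 4K RR case (pool, image, and client names are placeholders, not the actual test config):

[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio-test
rw=randread
bs=4k
iodepth=32
runtime=60
time_based

[4k-randread]
# Single job inheriting the global options above. Note that without
# fsync= or end_fsync=1 fio never issues flushes, which matters for the
# rbd cache writethrough-until-flush behaviour discussed below.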

For Giant the default cache settings changed to:

rbd cache = true
rbd cache writethrough until flush = true

If fio isn't sending flushes as the test is running, the cache will stay in writethrough mode. Does the difference remain if you set rbd cache writethrough until flush = false?
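
A sketch of the override being asked about, as it would appear in the client-side ceph.conf (the [client] section placement is assumed):

[client]
# Keep rbd cache = true (the Giant default) but skip the
# writethrough-until-flush safety behaviour, so the cache acts as
# writeback from the start of the run instead of waiting for a flush.
rbd cache writethrough until flush = false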

Josh

