Re: prioritizing reads over writes


 



We have SSD journals, backend disks are actually on SSD-fronted bcache devices in writeback mode. The client VMs have rbd cache enabled too...
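(For anyone reading the archive: client-side rbd cache is typically enabled in ceph.conf along these lines. This is a sketch with default-ish values; exact sizes are deployment-specific.)

```ini
[client]
rbd cache = true
; start in writethrough until the guest sends its first flush,
; so an unclean guest shutdown cannot lose unflushed writes
rbd cache writethrough until flush = true
rbd cache size = 33554432    ; bytes (32 MB is the default)
```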

-Simon


On Fri, Oct 31, 2014 at 4:07 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:

Hmmm, it sounds like you are just saturating the spindles to the point that latency starts to climb to unacceptable levels. The problem being that no matter how much tuning you apply, at some point the writes will have to start being put down to the disk and at that point performance will suffer.

 

Do your OSDs have SSD journals? In storage systems, adding some sort of writeback cache (in Ceph's case, the journals) normally lessens the impact of writes by absorbing bursts and by coalescing writes into a more sequential pattern for the underlying disks.
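(For reference, pointing a journal at an SSD is done per OSD in ceph.conf; the partition label below is a placeholder, not a real path from this thread.)

```ini
[osd]
; hypothetical example: put each OSD's journal on an SSD partition
; ($id expands to the OSD number in ceph.conf)
osd journal = /dev/disk/by-partlabel/journal-$id
osd journal size = 10240    ; MB
```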

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Xu (Simon) Chen
Sent: 31 October 2014 19:51
To: Nick Fisk
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] prioritizing reads over writes

 

I am already using the deadline scheduler, with the default parameters:

read_expire=500

write_expire=5000

writes_starved=2

front_merges=1

fifo_batch=16
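(For reference, these parameters map onto per-device sysfs files; the device name sda below is a placeholder for the OSD data disk, and the commented values are the defaults listed above.)

```shell
# Deadline tunables live under /sys/block/<dev>/queue/iosched/
cat /sys/block/sda/queue/iosched/read_expire      # 500  - target ms to service a read
cat /sys/block/sda/queue/iosched/write_expire     # 5000 - target ms to service a write
cat /sys/block/sda/queue/iosched/writes_starved   # 2    - read batches dispatched per write batch

# e.g. raising writes_starved lets reads starve writes for longer (needs root)
echo 4 > /sys/block/sda/queue/iosched/writes_starved
```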

 

I remember tuning them before; it didn't make a great difference.

 

-Simon

 

On Fri, Oct 31, 2014 at 3:43 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:

Hi Simon,

 

Have you tried using the Deadline scheduler on the Linux nodes? The deadline scheduler prioritises reads over writes. I believe it tries to service all reads within 500ms whilst writes can be delayed up to 5s.
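(Switching a disk to the deadline scheduler is a one-liner per device; sda below is a placeholder for your disk.)

```shell
# show available schedulers; brackets mark the active one, e.g. "noop [deadline] cfq"
cat /sys/block/sda/queue/scheduler

# switch this device to deadline (needs root); add elevator=deadline
# to the kernel command line to make it the boot-time default
echo deadline > /sys/block/sda/queue/scheduler
```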

 

I don't know the exact effect Ceph will have on top of this, but it would be the first thing I would try.

 

Nick

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Xu (Simon) Chen
Sent: 31 October 2014 19:37
To: ceph-users@xxxxxxxxxxxxxx
Subject: prioritizing reads over writes

 

Hi all,

 

My workload is mostly writes, but once the writes reach a certain throughput (not much higher in IOPS terms), the read throughput tanks. This seems to hurt my VMs' responsiveness overall. Reads recover after the write throughput drops.

 

Is there any way to prioritize reads over writes, or at least guarantee a certain level of aggregate read throughput in a cluster?

 

Thanks.

-Simon



 



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


