I think I might have to step out on this one; it sounds like you have all the basics covered for best performance and I can't think of anything else to suggest. Sorry I couldn't be of more help.

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Xu (Simon) Chen
Sent: 31 October 2014 20:15
To: Nick Fisk
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] prioritizing reads over writes

We have SSD journals, and the backend disks are actually on SSD-fronted bcache devices in writeback mode. The client VMs have rbd cache enabled too...

On Fri, Oct 31, 2014 at 4:07 PM, Nick Fisk <nick@xxxxxxxxxx> wrote:

Hmmm, it sounds like you are just saturating the spindles to the point that latency starts to climb to unacceptable levels. The problem is that no matter how much tuning you apply, at some point the writes have to start being flushed down to the disks, and at that point performance will suffer. Do your OSDs have SSD journals? In storage, adding some sort of writeback cache (in Ceph's case, the journals) normally lessens the impact of writes by absorbing bursts of writes and by coalescing them into a more sequential pattern for the underlying disks.

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Xu (Simon) Chen
Sent: 31 October 2014 19:51
To: Nick Fisk
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: prioritizing reads over writes

I am already using the deadline scheduler, with the default parameters. I remember tuning them before; it didn't make a great difference.

Hi Simon,

Have you tried using the deadline scheduler on the Linux nodes? The deadline scheduler prioritises reads over writes: I believe it tries to service all reads within 500 ms, whilst writes can be delayed up to 5 s. I don't know the exact effect Ceph will have on top of this, but this would be the first thing I would try.
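[For reference, the deadline tunables discussed above are exposed through sysfs. A minimal sketch of checking and biasing them further toward reads, assuming the OSD data disk is /dev/sdb (substitute your device); the 500 ms / 5000 ms defaults match the figures Nick quotes:]

```shell
# Confirm the active I/O scheduler for the OSD data disk
# (the one shown in brackets is active); /dev/sdb is an assumption:
cat /sys/block/sdb/queue/scheduler

# Deadline's per-queue tunables: read_expire defaults to 500 (ms),
# write_expire to 5000 (ms):
cat /sys/block/sdb/queue/iosched/read_expire
cat /sys/block/sdb/queue/iosched/write_expire

# Bias further toward reads: tighten the read deadline, let writes
# wait longer, and serve more read batches before each write batch
# (writes_starved defaults to 2). Requires root.
echo 250   > /sys/block/sdb/queue/iosched/read_expire
echo 10000 > /sys/block/sdb/queue/iosched/write_expire
echo 4     > /sys/block/sdb/queue/iosched/writes_starved
```

[These settings do not persist across reboots; a udev rule or rc.local entry is the usual way to reapply them.]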
Nick

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Xu (Simon) Chen
Sent: 31 October 2014 19:37
To: ceph-users@xxxxxxxxxxxxxx
Subject: prioritizing reads over writes

Hi all,

My workload is mostly writes, but when the writes reach a certain throughput (IOPS-wise, not much higher), the read throughput tanks. This seems to be impacting my VMs' responsiveness overall. Reads recover after the write throughput drops. Is there any way to prioritize reads over writes, or at least guarantee a certain level of aggregate read throughput in a cluster?
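[The rbd cache mentioned elsewhere in the thread is controlled by client-side options in ceph.conf. A sketch of the relevant section; the values shown are the defaults, included for illustration rather than as recommendations:]

```ini
[client]
    rbd cache = true
    # Stay in writethrough mode until the guest issues its first flush,
    # then switch to writeback (safer for guests that never flush):
    rbd cache writethrough until flush = true
    rbd cache size = 33554432        ; 32 MB cache per image (default)
    rbd cache max dirty = 25165824   ; 24 MB dirty limit (default)
```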
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com