Yes, but that’s a separate issue :-)
Some drives are just slow (100 IOPS) for synchronous writes even with no other load. The drives I’m testing do ~8K IOPS when not under load - having them drop to 10 IOPS is a huge problem. If it’s indeed a CFQ problem (as I suspect), then no matter what drive you have, you will have problems. (A minimal way to reproduce this is sketched at the bottom, below the quoted thread.)

Jan

> On 23 Jun 2015, at 14:03, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
>
> Oh sorry, I had missed that. Indeed that is surprising. Did you read
> the recent thread ("SSD IO performance") discussing the relevance of
> O_DSYNC performance for the journal?
>
> Cheers, Dan
>
> On Tue, Jun 23, 2015 at 1:54 PM, Jan Schermer <jan@xxxxxxxxxxx> wrote:
>> I only use SSDs, which is why I’m so surprised at the CFQ behaviour - the drive can sustain tens of thousands of reads per second and thousands of writes, yet saturating it with reads drops the writes to 10 IOPS. That’s mind-boggling to me.
>>
>> Jan
>>
>>> On 23 Jun 2015, at 13:43, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
>>>
>>> On Tue, Jun 23, 2015 at 1:37 PM, Jan Schermer <jan@xxxxxxxxxxx> wrote:
>>>> Yes, I use the same drive:
>>>>
>>>> one partition for the journal,
>>>> the other for XFS with the filestore.
>>>>
>>>> I am seeing slow requests when backfills are occurring - backfills hit the filestore, but the slow requests are (most probably) writes going to the journal, and 10 IOPS is just too few for anything.
>>>>
>>>> My Ceph version is dumpling - that explains the integers.
>>>> So it’s possible it doesn’t work at all?
>>>
>>> I thought that bug was fixed. You can check whether it worked by
>>> running "iotop -b -n1" and looking for threads with the idle priority.
>>>
>>>> Bad news about the backfills not being in the disk thread; I might have to use deadline after all.
>>>
>>> If your experience follows the same path as most users’, eventually
>>> deep scrubs will cause latency issues and you’ll switch back to cfq
>>> plus ionicing the disk thread.
>>>
>>> Are you using Ceph RBD or object storage? If RBD, you’ll eventually
>>> find that you need to put the journals on an SSD.
>>>
>>> Cheers, Dan
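For anyone who wants to reproduce the starvation described above, here is a minimal fio sketch. Assumptions: a scratch device /dev/sdX (placeholder - this writes to the raw device and destroys data), fio installed, and the device under cfq. Note fio’s --sync=1 opens with O_SYNC rather than the journal’s O_DSYNC, which is close enough to show the effect.

    # 1) Baseline: synchronous 4k write IOPS with no other load
    #    (essentially the journal test from the "SSD IO performance" thread)
    fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --iodepth=1 --numjobs=1 \
        --runtime=30 --time_based --group_reporting

    # 2) Saturate the drive with random reads in the background...
    fio --name=read-load --filename=/dev/sdX --direct=1 \
        --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
        --runtime=90 --time_based --group_reporting &

    # 3) ...and rerun the synchronous write job while the reads run.
    #    Under cfq the write IOPS should collapse (~10 IOPS in the
    #    case above); repeat under deadline to compare.
    fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --iodepth=1 --numjobs=1 \
        --runtime=30 --time_based --group_reporting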
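And a sketch of the scheduler comparison and the iotop check Dan mentions. Again sdX and <TID> are placeholders, and the ceph.conf options shown are exactly the ones whose parsing was in question on dumpling, so verify they actually take effect on your version:

    # Show the active scheduler for the OSD disk (the one in brackets)
    cat /sys/block/sdX/queue/scheduler

    # Try deadline for comparison, then switch back to cfq
    echo deadline > /sys/block/sdX/queue/scheduler
    echo cfq > /sys/block/sdX/queue/scheduler

    # Verify the OSD disk thread actually got the idle class
    iotop -b -n1 | grep -i idle

    # Manually push a thread into the idle class if it did not
    ionice -c 3 -p <TID>

    # ceph.conf ([osd] section) - ioprio classes are only honoured by cfq
    osd disk thread ioprio class = idle
    osd disk thread ioprio priority = 7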