Re: IO scheduler & osd_disk_thread_ioprio_class

Hi Jan,

What SSD model?

I've seen SSDs that usually work quite well but then suddenly deliver totally awful performance for a while (though not the ~8K IOPS you see).

I think some kind of firmware process was involved; I ended up replacing the drive with a serious datacenter (DC) one.

On 23/06/15 at 14:07, Jan Schermer wrote:
Yes, but that’s a separate issue :-)
Some drives are just slow (100 IOPS) for synchronous writes with no other load.
The drives I'm testing do ~8K IOPS when not under load - having them drop to 10 IOPS is a huge problem. If it's indeed a CFQ problem (as I suspect), then no matter what drive you have, you will have problems.
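If anyone wants to reproduce this, something like the following fio sketch should do it - the device path and runtimes are placeholders, and it destroys data on the target:

    # Two concurrent jobs against the same device: a random-read flood
    # plus a queue-depth-1 synchronous writer. WARNING: writes directly
    # to /dev/sdX and destroys its contents.
    fio --filename=/dev/sdX --direct=1 --bs=4k --runtime=60 --time_based \
        --name=read-flood --rw=randread --iodepth=32 --numjobs=4 \
        --name=sync-writes --rw=write --sync=1 --iodepth=1

Under CFQ, watch what happens to the sync-writes job's IOPS once read-flood ramps up.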

Jan

On 23 Jun 2015, at 14:03, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:

Oh sorry, I had missed that. Indeed that is surprising. Did you read
the recent thread ("SSD IO performance") discussing the relevance of
O_DSYNC performance for the journal?
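The quick test from that thread is, roughly, a single-threaded dsync write - something like this (the path is a placeholder; the run overwrites the target):

    # Sequential 4k writes opened with O_DIRECT|O_DSYNC, mimicking
    # journal writes. WARNING: overwrites /dev/sdX.
    dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync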

Cheers, Dan

On Tue, Jun 23, 2015 at 1:54 PM, Jan Schermer <jan@xxxxxxxxxxx> wrote:
I only use SSDs, which is why I'm so surprised at the CFQ behaviour - the drive can sustain tens of thousands of reads per second and thousands of writes, yet saturating it with reads drops the writes to 10 IOPS. That's mind-boggling to me.

Jan

On 23 Jun 2015, at 13:43, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:

On Tue, Jun 23, 2015 at 1:37 PM, Jan Schermer <jan@xxxxxxxxxxx> wrote:
Yes, I use the same drive

one partition for journal
other for xfs with filestore

I am seeing slow requests when backfills are occurring - backfills hit the filestore, but the slow requests are (most probably) writes going to the journal, and 10 IOPS is just too few for anything.
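To confirm it really is journal writes being starved, the slow-request messages can be watched while a backfill runs - roughly like this (exact wording varies by release):

    # Slow requests appear in cluster health and in the cluster log:
    ceph health detail | grep -i 'slow request'
    ceph -w | grep -i 'slow request'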


My Ceph version is Dumpling - that explains the integers.
So it’s possible it doesn’t work at all?
I thought that bug was fixed. You can check if it worked by using
"iotop -b -n1" and looking for threads with the idle priority.

Bad news about the backfills not being in the disk thread - I might have to use deadline after all.
If your experience follows the same path as most users', eventually deep scrubs will cause latency issues and you'll switch back to cfq plus ionicing the disk thread.
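Roughly this kind of setup - sdX is a placeholder, per-thread IO priorities only take effect under cfq, and going by your Dumpling note the class may need to be the integer 3 rather than the name:

    # Put the OSD data disk back on cfq (ionice classes are only
    # honoured by the cfq scheduler):
    echo cfq > /sys/block/sdX/queue/scheduler

and in ceph.conf:

    [osd]
    osd_disk_thread_ioprio_class = idle    # reportedly the integer 3 on Dumpling
    osd_disk_thread_ioprio_priority = 7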

Are you using Ceph RBD or object storage? If RBD, eventually you'll
find that you need to put the journals on an SSD.
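The usual journal move is something like this sketch - the OSD id, service commands and partition path are all placeholders:

    # Move an existing FileStore journal to an SSD partition.
    service ceph stop osd.N
    ceph-osd -i N --flush-journal
    ln -sf /dev/disk/by-partuuid/SSD-PART-UUID /var/lib/ceph/osd/ceph-N/journal
    ceph-osd -i N --mkjournal
    service ceph start osd.N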

Cheers, Dan


--
Technical Director
Binovo IT Human Project, S.L.
Tel. 943575997 / 943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



