Re: low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

Hi Eric,

You say you don't have access to the raw drives. What does that mean exactly? Do you run the Ceph OSDs inside VMs? In that case you should probably disable the Microns' write cache on the hosts, not just inside the VMs.
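
For SATA drives like these, a minimal sketch of what I mean, assuming the drives show up on the host as plain /dev/sdX devices and are not hidden behind a RAID controller:

  hdparm -W /dev/sdX     # check the current write cache setting
  hdparm -W 0 /dev/sdX   # disable the volatile write cache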

Yes, disabling the write cache only takes effect after a power cycle... or after the next hot-plug of the drive itself.

In some cases - I mean, with some HBAs/RAID controllers - disabling the drive write cache may not have any impact on performance. As I understand it, this is because some controllers already disable the drive write cache themselves by default.

Just benchmark your drives with fio and compare the IOPS with the results in https://docs.google.com/spreadsheets/d/1E9-eXjzsKboiCCX-0u0r5fAjjufLKayaut_FOPxYZjc/edit

If you get roughly the same ~15k or more IOPS with -rw=randwrite -fsync=1 -iodepth=1 under both hdparm -W 0 and hdparm -W 1, you're good :) If there is a cache problem, you'll get much less.
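
For reference, the full command I have in mind is something like this (a sketch only; adjust the device name and runtime, and note that writing to the raw device destroys any data on it):

  fio -ioengine=libaio -direct=1 -name=test -bs=4k -iodepth=1 -fsync=1 -rw=randwrite -runtime=60 -filename=/dev/sdX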

As for the Micron 5300s, please benchmark them when you have them, as described in the same sheet https://docs.google.com/spreadsheets/d/1E9-eXjzsKboiCCX-0u0r5fAjjufLKayaut_FOPxYZjc/edit (the instructions are at the end of the sheet).

Hi Vitaliy,

Similar to Stefan, we have a bunch of Micron 5200s (3.84TB ECO SATA
version) in a Ceph cluster (Nautilus), and performance seems less than
optimal.  I have followed all the instructions on your site (thank you
for your wonderful article, btw!), but I haven't seen much change.

The only thing I could think of is that "maybe" disabling the write
cache only takes effect after a reboot or power cycle?  Is that
necessary, or is it a "live" change?

I have tested with the cache disabled as well as enabled on all
drives.  We're running fio in a QEMU/KVM VM in an OpenStack cluster,
so we don't have "raw" access to the Micron 5200s.  The OSD (BlueStore)
nodes run CentOS 7 with a 4.18.x kernel.  Testing shows little or no
difference, small enough that the variations could be considered
"noise" in the results.  Certainly no change that anyone could tell.

Thought I'd check to see if you, or anyone else, might have any
suggestions specific to the Micron 5200.

We have some Micron 5300s inbound, but probably won't have them here
for another few weeks due to Micron's manufacturing delays; once they
arrive we will be able to test them as raw drives.  I will report back
afterwards, but if you already know anything about them, I'm all
ears. :)

Thank you!

Eric

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Stefan Bauer
Sent: Tuesday, January 14, 2020 10:28 AM
To: undisclosed-recipients
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

Thank you all,

performance is indeed better now. Can now go back to sleep ;)

KR

Stefan

-----Original Message-----
From: Виталий Филиппов <vitalif@xxxxxxxxxx>
Sent: Tuesday, 14 January 2020 10:28
To: Wido den Hollander <wido@xxxxxxxx>; Stefan Bauer <stefan.bauer@xxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

...disable signatures and rbd cache. I didn't mention it in the email
so as not to repeat myself, but it's in the article :-)
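
For reference, a minimal sketch of the client-side options usually meant by this, assuming the standard option names (double-check them against your Ceph release):

  [global]
  cephx_sign_messages = false       # disable cephx message signing
  cephx_require_signatures = false

  [client]
  rbd_cache = false                 # disable the librbd cache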
--
With best regards,
Vitaliy Filippov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



