Re: low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

So hdparm -W 0 /dev/sdx doesn't work, or it makes no difference?  Also, I am not sure I understand why it should happen before the OSDs have been started.  At least in my experience hdparm applies the change to the hardware regardless.
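For SATA drives this is normally a live change that can be verified right away; a minimal sketch (the device name is a placeholder):

  # disable the volatile write cache on a SATA drive (takes effect immediately)
  hdparm -W 0 /dev/sdx
  # read the write-cache flag back to verify
  hdparm -W /dev/sdx
  # smartctl can report the same setting and also covers SAS drives
  smartctl -g wcache /dev/sdx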

On Mon, Jan 20, 2020, 2:25 AM Frank Schilder <frans@xxxxxx> wrote:
We are using Micron 5200 PRO, 1.92TB for RBD images on KVM and are very happy with the performance. We are using EC 6+2 pools, which really eat up IOPS. Still, we get enough performance out of them to run 20-50 VMs per disk, which results in good space utilisation as well, since our default image size is 50GB and we take rolling snapshots. I was thinking about 4TB disks too, but am concerned that their IOPS/TB is too low for images on EC pools.
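For context, an RBD image backed by an EC pool is set up roughly like this on Luminous and later (a sketch only; the pool names, PG counts and the assumption of an existing replicated "rbd" metadata pool are examples, not the actual configuration described above):

  # erasure-code profile matching the 6+2 layout mentioned above
  ceph osd erasure-code-profile set ec62 k=6 m=2 crush-failure-domain=host
  # EC data pool; overwrites must be enabled before RBD can use it
  ceph osd pool create rbd-ec-data 1024 1024 erasure ec62
  ceph osd pool set rbd-ec-data allow_ec_overwrites true
  # image metadata lives in a replicated pool, the data goes to the EC pool
  rbd create rbd/vm-image --size 50G --data-pool rbd-ec-data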

We found the raw throughput in fio benchmarks to be very different with write cache enabled versus disabled, exactly as explained in the performance article. The setting has to be re-applied at boot time, since it does not persist across power cycles. Unfortunately, I couldn't find a reliable way to disable the write cache at boot time (I was looking for tuned configs) and ended up adding this to a container startup script:

  if [[ "$1" == "osd_ceph_disk_activate" && -n "${OSD_DEVICE}" ]] ; then
    echo "Disabling write cache on ${OSD_DEVICE}"
    /usr/sbin/smartctl -s wcache=off "${OSD_DEVICE}"
  fi

This works for both SAS and SATA drives and ensures that the write cache is disabled before an OSD daemon starts.
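A boot-time alternative (just a sketch of the idea, not the setup described above) is to let the kernel switch the disks to write-through via sysfs before the OSDs come up, for example from a small systemd unit ordered before the OSD containers:

  # set all SCSI/SATA disks to write-through, which disables the volatile
  # write cache on the drives themselves
  for ct in /sys/class/scsi_disk/*/cache_type; do
      echo "write through" > "$ct"
  done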

Best regards,

=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Eric K. Miller <emiller@xxxxxxxxxxxxxxxxxx>
Sent: 19 January 2020 04:24:33
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

Hi Vitaliy,

Similar to Stefan, we have a bunch of Micron 5200s (3.84TB ECO SATA version) in a Ceph cluster (Nautilus), and performance seems less than optimal.  I have followed all the instructions on your site (thank you for your wonderful article btw!!), but I haven't seen much change.

The only thing I could think of is that "maybe" disabling the write cache only takes place upon a reboot or power cycle?  Is that necessary?  Or is it a "live" change?

I have tested with the cache disabled as well as enabled on all drives.  We're using fio running in a QEMU/KVM VM in an OpenStack cluster, so not "raw" access to the Micron 5200s.  OSD (BlueStore) nodes run CentOS 7 with a 4.18.x kernel.  Testing shows little or no difference; the variations are small enough to be considered "noise" in the results.  Certainly no change that anyone could tell.
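For reference, the write cache setting mostly affects single-threaded synchronous writes against the raw device, so a test inside a VM may well hide the difference. A typical raw-device test along the lines of the article would be something like the following sketch (the device name is a placeholder and the run destroys data on it):

  # 4k random writes at queue depth 1 with sync - the workload most sensitive
  # to the volatile write cache setting (destroys data on /dev/sdx!)
  fio --name=writelat --filename=/dev/sdx --ioengine=libaio --direct=1 \
      --sync=1 --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
      --runtime=60 --time_based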

Thought I'd check to see if you, or anyone else, might have any suggestions specific to the Micron 5200.

We have some Micron 5300s inbound, but probably won't have them here for another few weeks due to Micron's manufacturing delays; once they arrive I will be able to test them as raw drives.  I will report back afterwards, but if you already know anything about these, I'm all ears. :)

Thank you!

Eric


From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Stefan Bauer
Sent: Tuesday, January 14, 2020 10:28 AM
To: undisclosed-recipients
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: low io with enterprise SSDs ceph luminous - can we expect more? [klartext]


Thank you all,

performance is indeed better now. Can now go back to sleep ;)

KR

Stefan


-----Original Message-----
From: Виталий Филиппов <vitalif@xxxxxxxxxx>
Sent: Tuesday, 14 January 2020 10:28
To: Wido den Hollander <wido@xxxxxxxx>; Stefan Bauer <stefan.bauer@xxxxxxxxxxx>
CC: ceph-users@xxxxxxxxxxxxxx
Subject: Re: low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

...disable signatures and rbd cache. I didn't mention it in the email so as not to repeat myself. But I have it in the article :-)
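For reference, the client-side settings being referred to would look roughly like this in ceph.conf (a sketch only; please check the article for the exact set of options and their scope):

  [global]
  # disable cephx message signing (trades a little security for latency)
  cephx_sign_messages = false

  [client]
  # disable the librbd client-side cache
  rbd_cache = false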
--
With best regards,
Vitaliy Filippov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
