Re: High ceph_osd_commit_latency_ms on Toshiba MG07ACA14TE HDDs


 



Ah, OK, misunderstood the question.

In my experience, no. I run the corresponding smartctl command on every drive just before the OSD daemon starts. I use smartctl because the same command works for both SAS and SATA drives (otherwise you would have to choose between hdparm and sdparm). All SAS drives I have received came with the write cache disabled by default, though.
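
For reference, the invocation is essentially the following (the device name is just a placeholder):

    smartctl -s wcache,off /dev/sdX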

I think the blog post gives a very good explanation of why disabling the volatile write cache on any drive is either beneficial or has no effect, and is therefore always safe (and recommended). At least that is how I read it, and I have no contradicting evidence.

To get back to the last part of your question, I think if the OSD daemon just did it by default, a lot of people would have a better life.
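
If you want something close to that behaviour today, a host-wide udev rule is one option. A rough sketch only (file name and matching are examples, adjust to your environment):

    # /etc/udev/rules.d/99-disable-write-cache.rules -- illustrative sketch
    ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd*", ENV{DEVTYPE}=="disk", RUN+="/usr/sbin/smartctl -s wcache,off /dev/%k"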

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Paul Emmerich <paul.emmerich@xxxxxxxx>
Sent: 24 June 2020 17:39:16
To: Frank Schilder
Cc: Frank R; Benoît Knecht; s.priebe@xxxxxxxxxxxx; ceph-users@xxxxxxx
Subject: Re:  Re: High ceph_osd_commit_latency_ms on Toshiba MG07ACA14TE HDDs

Well, what I was saying was "does it hurt to unconditionally run hdparm -W 0 on all disks?"

Which disk would suffer from this? I haven't seen any disk where this would be a bad idea.
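
To be concrete, I mean something along these lines (just an illustration, the device glob is an assumption):

    for dev in /dev/sd?; do hdparm -W 0 "$dev"; done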


Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Wed, Jun 24, 2020 at 5:35 PM Frank Schilder <frans@xxxxxx> wrote:
Yes, a non-volatile write cache helps, as described in the wiki. When you disable the write cache with hdparm, it actually only disables the volatile write cache. That's why SSDs with power-loss protection are recommended for Ceph.

A SAS/SATA SSD without any write cache will perform poorly no matter what.
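
If you want to check what a drive currently reports, something like this works (device name is a placeholder):

    hdparm -W /dev/sdX           # SATA: show the volatile write-cache state
    smartctl -g wcache /dev/sdX  # SATA/SAS: show the write-cache setting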

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Paul Emmerich <paul.emmerich@xxxxxxxx>
Sent: 24 June 2020 17:30:51
To: Frank R
Cc: Benoît Knecht; s.priebe@xxxxxxxxxxxx; ceph-users@xxxxxxx
Subject:  Re: High ceph_osd_commit_latency_ms on Toshiba MG07ACA14TE HDDs

Has anyone ever encountered a drive with a write cache that actually
*helped*?
I haven't.

As in: would it be a good idea for the OSD to just disable the write cache
on startup? Worst case it doesn't do anything, best case it improves
latency.

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Wed, Jun 24, 2020 at 3:49 PM Frank R <frankaritchie@xxxxxxxxx> wrote:

> FYI, there is an interesting note on disabling the write cache here:
>
>
> https://yourcmc.ru/wiki/index.php?title=Ceph_performance&mobileaction=toggle_view_desktop#Drive_cache_is_slowing_you_down
>
> On Wed, Jun 24, 2020 at 9:45 AM Benoît Knecht <bknecht@xxxxxxxxxxxxx> wrote:
> >
> > Hi Igor,
> >
> > Igor Fedotov wrote:
> > > for the sake of completeness one more experiment please if possible:
> > >
> > > turn off write cache for HGST drives and measure commit latency once
> > > again.
> >
> > I just did the same experiment with HGST drives, and disabling the write
> > cache on those drives brought the latency down from about 7.5 ms to about 4 ms.
> >
> > So it seems disabling the write cache across the board would be advisable
> > in our case. Is it recommended in general, or specifically when the DB+WAL
> > is on the same hard drive?
> >
> > Stefan, Mark, are you disabling the write cache on your HDDs by default?
> >
> > Cheers,
> >
> > --
> > Ben
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



