Re: Tuning Nautilus for flash only

Please don't run this config in production.
Disabling checksumming is a bad idea, and disabling authentication is
also pretty bad.
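
For reference, the safe defaults those options override look roughly
like this in ceph.conf (just a sketch of the relevant lines, not a
full config; cephx and crc32c are the stock settings):

  [global]
  # cephx authentication is enabled by default; keep it that way
  auth_cluster_required = cephx
  auth_service_required = cephx
  auth_client_required = cephx

  [osd]
  # BlueStore checksums every write with crc32c by default; leave it on
  bluestore_csum_type = crc32c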

There are also a few options in there that no longer exist (osd op
threads) or are no longer relevant (max open files). In general, you
should not blindly copy config files you find on the Internet; only
set an option to a non-default value after carefully checking what it
does and whether it applies to your use case.
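
For example, on Nautilus you can check what an option does, what its
default is, and what a daemon is actually running with (the option
name below is just an example):

  # describe an option and show its default value
  ceph config help bluestore_csum_type

  # show the effective config of a running OSD
  ceph config show osd.0 | grep csum

  # or query the admin socket on the OSD's host
  ceph daemon osd.0 config get bluestore_csum_type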

Also, run benchmarks yourself. Use benchmarks that are relevant to
your use case.
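
For a qemu/librbd workload like yours, something like fio with the rbd
engine gets closer to reality than rados bench alone. Pool and image
names below are placeholders, and the block size, queue depth and
read/write mix should be adjusted to match your actual VMs:

  fio --name=rbd-mixed --ioengine=rbd --clientname=admin \
      --pool=rbd --rbdname=benchimage \
      --rw=randrw --rwmixread=50 --bs=4k --iodepth=32 \
      --time_based --runtime=120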

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Thu, Nov 28, 2019 at 1:17 PM Wido den Hollander <wido@xxxxxxxx> wrote:
>
>
>
> On 11/28/19 12:56 PM, David Majchrzak, ODERLAND Webbhotell AB wrote:
> > Hi!
> >
> > We've deployed a new flash only ceph cluster running Nautilus and I'm
> > currently looking at any tunables we should set to get the most out of
> > our NVMe SSDs.
> >
> > I've been looking a bit at the options from the blog post here:
> >
> > https://ceph.io/community/bluestore-default-vs-tuned-performance-comparison/
> >
> > with the conf here:
> > https://gist.github.com/likid0/1b52631ff5d0d649a22a3f30106ccea7
> >
> > However, some of them, like checksumming, are for testing speed only
> > and not really applicable in a real-life scenario with critical data.
> >
> > Should we stick with defaults or is there anything that could help?
> >
> > We have 256GB of RAM on each OSD host, 8 OSD hosts with 10 SSDs each
> > and 2 OSD daemons per SSD. Should we raise the SSD bluestore cache to 8GB?
> >
> > Workload is about 50/50 r/w ops from qemu VMs through librbd, so
> > mixed block sizes.
>
> Pin the C-state of your CPUs to C1 and disable power saving. That can
> vastly reduce latency.
>
> Testing with "rados bench -t 1 -b 4096 -o 4096", you should be able to
> get down to about 0.8ms write latency with 3x replication.
>
> >
> > 3 replicas.
> >
> > Appreciate any advice!
> >
> > Kind Regards,
> >



