Tuning Nautilus for flash only

Hi!

We've deployed a new flash-only Ceph cluster running Nautilus, and I'm
currently looking at which tunables we should set to get the most out
of our NVMe SSDs.

I've been looking a bit at the options from the blog post here:

https://ceph.io/community/bluestore-default-vs-tuned-performance-comparison/

with the conf here:
https://gist.github.com/likid0/1b52631ff5d0d649a22a3f30106ccea7

However, some of them, like disabling checksumming, are meant for
benchmarking only and aren't really applicable in a real-life scenario
with critical data.
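For reference, the subset of that conf I'd feel safe keeping in
production is the debug log silencing; disabling checksums is the part
I'd skip. A rough sketch, assuming Nautilus's config database and the
usual debug subsystem names:

    # safe in production: cut debug logging overhead on the OSDs
    ceph config set osd debug_ms 0/0
    ceph config set osd debug_osd 0/0
    ceph config set osd debug_bluestore 0/0
    # benchmark-only, not for critical data (leaves data unchecksummed):
    # ceph config set osd bluestore_csum_type none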

Should we stick with defaults or is there anything that could help?

We have 8 OSD hosts with 256 GB of RAM and 10 NVMe SSDs each, running
2 OSD daemons per SSD. Should we raise the BlueStore cache / OSD
memory target to 8 GB per OSD?
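Concretely, I was thinking of something like this, assuming
osd_memory_target is the right knob on Nautilus (20 OSDs per host x
8 GB = 160 GB, which still leaves headroom out of 256 GB):

    # raise the per-OSD memory target from the 4 GiB default to 8 GiB
    ceph config set osd osd_memory_target 8589934592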

The workload is roughly 50/50 read/write ops from QEMU VMs through
librbd, so mixed block sizes.
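In case it matters, this is roughly how I've been exercising that
pattern with fio's rbd engine; pool and image names here are just
placeholders:

    # 50/50 random read/write at 4k against a test RBD image
    fio --name=rbdbench --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=bench-image \
        --rw=randrw --rwmixread=50 --bs=4k --iodepth=32 \
        --direct=1 --runtime=120 --time_based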

3 replicas.

Appreciate any advice!

Kind Regards,
-- 
David Majchrzak




