Re: Again - state of Ceph NVMe and SSDs

On 01/16/2016 12:06 PM, David wrote:
Hi!

We’re planning our third Ceph cluster and have been trying to figure out how to
maximize IOPS on this one.

Our needs:
* Pool for MySQL, rbd (mounted as /var/lib/mysql or equivalent on KVM
servers)
* Pool for storage of many small files, rbd (probably dovecot maildir
and dovecot index etc)

So I’ve been reading up on:

https://communities.intel.com/community/itpeernetwork/blog/2015/11/20/the-future-ssd-is-here-pcienvme-boosts-ceph-performance

and this ceph-users thread from October 2015:

http://www.spinics.net/lists/ceph-users/msg22494.html

We’re planning something like 5 OSD servers, with:

* 4x 1.2TB Intel S3510
* 8x 4TB HDD
* 2x Intel P3700 Series HHHL PCIe 400GB (one for the SSD pool journals and
one for the HDD pool journals)
* 2x 80GB Intel S3510 raid1 for system
* 256GB RAM
* 2x 8-core Intel(R) Xeon(R) E5-2630 v3 @ 2.40GHz or better

This cluster will probably run Hammer LTS unless there are huge
improvements in Infernalis when dealing with 4K IOPS.
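
Roughly, the split we have in mind would look something like this on Hammer
(pool names, PG counts and device paths below are just placeholders, not
decided values):

    # Separate CRUSH roots so the S3510 OSDs and the 4TB HDD OSDs end up in
    # different pools (with SSDs and HDDs in the same chassis you'd normally
    # add per-host pseudo-buckets, e.g. osd01-ssd / osd01-hdd, and move the
    # OSDs under them):
    ceph osd crush add-bucket ssd root
    ceph osd crush add-bucket hdd root
    ceph osd crush rule create-simple ssd_rule ssd host
    ceph osd crush rule create-simple hdd_rule hdd host

    # Pools: MySQL rbd on the SSDs, maildir/index rbd on the HDDs
    ceph osd pool create mysql 512 512 replicated ssd_rule
    ceph osd pool create mail 1024 1024 replicated hdd_rule

    # Journals on the P3700s, e.g. an HDD OSD with its journal on an NVMe partition:
    ceph-disk prepare /dev/sdb /dev/nvme0n1p1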

The first link above hints at awesome performance. The second one, from
the list, not so much yet...

Is anyone running Hammer or Infernalis with a setup like this?
Is it a sane setup?
Will we become CPU-constrained, or can we just throw more RAM at it? :D

On the write side you can pretty quickly hit CPU limits, though upgrading to tcmalloc 2.4 with a high thread cache, or switching to jemalloc, helps dramatically. There have been various posts and threads about it here on the mailing list, but generally in CPU-constrained scenarios people are seeing pretty dramatic improvements (on the order of 4x the write IOPS with SSDs/NVMe).
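
If it helps, the knobs involved are roughly these (file locations and the
library path vary by distro, so treat them as examples rather than exact
values):

    # /etc/sysconfig/ceph (or /etc/default/ceph), read by the OSD init scripts --
    # raise tcmalloc's total thread cache from the small default to e.g. 128MB:
    TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728

    # Or preload jemalloc for the OSDs instead of tcmalloc
    # (library path is distro-specific):
    LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1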

We are also seeing a dramatic improvement in small random write performance with bluestore, but that's only going to be a tech preview in Jewel.
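
For anyone who wants to try it early, enabling it in Jewel should look roughly
like this in ceph.conf (it is explicitly experimental, so keep it away from
data you care about, and check the release notes for the exact option names):

    [osd]
    enable experimental unrecoverable data corrupting features = bluestore rocksdb
    osd objectstore = bluestore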


Kind Regards,
David Majchrzak


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



