Re: Planning all flash cluster

* More small servers give better performance than a few big ones; consider twice the number of servers with half the disks, CPUs, and RAM each.
* 2x 10 Gbit is usually enough, especially with more servers; the network will rarely be the bottleneck (unless you have extreme bandwidth requirements).
* You could save money by using plain Ethernet unless you already have IB infrastructure in place.
* You might need to reduce the BlueStore cache size a little (the default is 3 GB per OSD for SSDs), since you are running with 4 GB of RAM per OSD. That's fine, the setting just needs a small tweak; see the sketch after this list.
* The SM863a is a great disk, good choice. NVMe DB disks are not needed here.
* RAID controllers are evil in most cases; configure them as JBOD.
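
For example, something along these lines in ceph.conf (the 2 GiB value is only an illustration; the OSD process needs memory beyond the cache itself, so leave headroom):

    [osd]
    # default bluestore_cache_size_ssd is 3 GiB; with 4 GB RAM per OSD,
    # dropping it to about 2 GiB leaves room for the rest of the OSD process
    bluestore_cache_size_ssd = 2147483648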



Paul

2018-06-20 13:58 GMT+02:00 Nick A <nick.bmth@xxxxxxxxx>:
Hello Everyone,

We're planning a small cluster on a budget, and I'd like to request any feedback or tips.

3x Dell R720XD with:
2x Xeon E5-2680v2 or very similar
96GB RAM
2x Samsung SM863 240GB boot/OS drives
4x Samsung SM863 960GB OSD drives
Dual 40/56 Gbit InfiniBand using IPoIB.

3x replication, MONs on the OSD nodes, RBD only (no object storage or CephFS). The pool setup would look roughly like the sketch below.
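
A minimal sketch of what we have in mind (pool name and PG count are placeholders; the PG count would be sized for the eventual OSD count):

    ceph osd pool create rbd 512 512 replicated    # PG count is a placeholder
    ceph osd pool set rbd size 3                   # 3 replicas
    ceph osd pool set rbd min_size 2               # stay writable with one replica down
    ceph osd pool application enable rbd rbd       # Luminous+: tag the pool for RBD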

We'll probably add another 2 OSD drives per month per node until full (24 SSDs per node), at which point we'll add more nodes. We've got a few SM863s in production on other systems and are seriously impressed with them, so we'd like to use them for Ceph too. Each new drive would be brought in along the lines of the sketch below.
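
Roughly like this, assuming BlueStore and ceph-volume (the device name is a placeholder for whatever the new disk enumerates as):

    # create and start a new BlueStore OSD on the freshly added disk
    ceph-volume lvm create --bluestore --data /dev/sdX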

We're hoping this will provide a decent amount of IOPS; 20k would be ideal, and we'd verify that with something like the fio run below. I'd like to avoid NVMe journals unless they make a truly massive difference. Same with carving up the SSDs: we'd rather not, and would prefer to keep things as simple as possible.
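
Something like this against a scratch RBD image is what we'd use to check the 20k figure (a sketch; it assumes fio built with RBD support, and the pool and image names are placeholders):

    # 4k random writes against an RBD image; numbers are illustrative
    fio --name=rbd-iops --ioengine=rbd --pool=rbd --rbdname=test \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --group_reporting \
        --runtime=60 --time_based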

Is there anything that obviously stands out as severely unbalanced? The R720XD comes with an H710 - instead of putting the disks in single-drive RAID0, I'm thinking a different HBA might be a better idea. Any recommendations, please?

Regards,
Nick


--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
