Fwd: Planning all flash cluster

Adding the list back in :)

---------- Forwarded message ---------
From: Luis Periquito <periquito@xxxxxxxxx>
Date: Wed, Jun 20, 2018 at 1:54 PM
Subject: Re:  Planning all flash cluster
To: <nick.bmth@xxxxxxxxx>


On Wed, Jun 20, 2018 at 1:35 PM Nick A <nick.bmth@xxxxxxxxx> wrote:
>
> Thank you, I was under the impression that 4GB RAM per 1TB was quite generous; is that not the case with all-flash clusters? What's the currently recommended RAM per OSD? Happy to throw more at it for a performance boost. The important thing is that I'd like all nodes to be absolutely identical.
I'm doing 8GB per OSD, though I use 1.9TB SSDs.
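For what it's worth, on BlueStore that budget can be pinned per daemon
with osd_memory_target (assuming you're on a release that has the
option); a minimal ceph.conf sketch, using the 8GB figure above purely
as an illustration:

    [osd]
    # per-OSD memory budget for BlueStore, in bytes (8 GiB here)
    osd_memory_target = 8589934592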

>
> Based on the replies so far, it looks like 5 nodes might be a better idea, maybe each with 14 OSDs (960GB SSDs)? There are plenty of 16-slot 2U chassis around to make it a no-brainer if that's what you'd recommend!
I tend to add more nodes: 1U chassis with 4-8 SSDs each to start with,
and a single high-frequency CPU. For IOPS/latency, CPU frequency is
really important.
I have started a cluster with only 2 SSDs per node for data (shared
with the OS), but with 8 nodes. Those servers can take up to 10 drives.
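To make that trade-off concrete, a quick back-of-envelope sketch in
Python (all numbers illustrative): usable capacity is the same either
way, but the share of data you lose and must re-replicate when a node
fails shrinks as the node count grows.

    # usable capacity under N-way replication
    def usable_tb(nodes, osds_per_node, osd_tb, replicas=3):
        return nodes * osds_per_node * osd_tb / replicas

    print(usable_tb(3, 8, 0.96))  # 3 big nodes   -> ~7.7 TB usable
    print(usable_tb(8, 3, 0.96))  # 8 small nodes -> ~7.7 TB usable,
                                  # but a dead node is 1/8 of the
                                  # cluster instead of 1/3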

I'm using the Fujitsu RX1330 (I believe the Dell equivalent is the
R330), with an Intel E3-1230v6 CPU, 64GB of RAM, dual 10G NICs, and a
PSAS (passthrough) controller.

>
> The H710 doesn't do JBOD or passthrough, hence the search for an alternative HBA. It would be nice to do the boot drives as hardware RAID 1 though, so a card that can do both at the same time (like the H730 found in R630s etc.) would be ideal.
>
> Regards,
> Nick
>
> On 20 June 2018 at 13:18, Luis Periquito <periquito@xxxxxxxxx> wrote:
>>
>> Adding more nodes from the beginning would probably be a good idea.
>>
>> On Wed, Jun 20, 2018 at 12:58 PM Nick A <nick.bmth@xxxxxxxxx> wrote:
>> >
>> > Hello Everyone,
>> >
>> > We're planning a small cluster on a budget, and I'd like to request any feedback or tips.
>> >
>> > 3x Dell R720XD with:
>> > 2x Xeon E5-2680v2 or very similar
>> The CPUs look good and sufficiently fast for IOPS.
>>
>> > 96GB RAM
>> 4GB per OSD looks a bit on the short side. 192GB would probably help.
>>
>> > 2x Samsung SM863 240GB boot/OS drives
>> > 4x Samsung SM863 960GB OSD drives
>> > Dual 40/56Gbit InfiniBand using IPoIB.
>> >
>> > 3x replication, MONs on the OSD nodes, RBD only (no object storage or CephFS).
>> >
>> > We'll probably add another 2 OSD drives per month per node until full (24 SSDs per node), at which point, more nodes. We've got a few SM863s in production on other systems and are seriously impressed with them, so we'd like to use them for Ceph too.
>> >
>> > We're hoping this is going to provide a decent amount of IOPS; 20k would be ideal. I'd like to avoid NVMe journals unless they're going to make a truly massive difference. Same with carving up the SSDs: we'd rather not, and just keep it as simple as possible.
>> I agree: those SSDs shouldn't really require a journal device. I'm
>> not sure about the 20k IOPS, especially without any further
>> information. Doing 20k IOPS at a 1kB block size is totally different
>> from doing it at a 1MB block size...
>> >
>> > Is there anything that obviously stands out as severely unbalanced? The R720XD comes with an H710; instead of putting the drives in RAID0, I'm thinking a different HBA might be a better idea. Any recommendations please?
>> I don't know that HBA. Does it support pass-through mode or HBA mode?
>> >
>> > Regards,
>> > Nick
>
>
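PS: on the 20k IOPS question quoted above, the reason block size
matters is that the bandwidth implied by a fixed IOPS figure scales
linearly with it; a quick illustration (numbers made up):

    # bandwidth implied by a given IOPS figure and block size
    def bandwidth_mb_s(iops, block_kb):
        return iops * block_kb / 1024

    print(bandwidth_mb_s(20000, 1))     # 1kB blocks -> ~20 MB/s
    print(bandwidth_mb_s(20000, 1024))  # 1MB blocks -> 20000 MB/s (~20 GB/s)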
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


