Re: Planning all flash cluster

Adding more nodes from the beginning would probably be a good idea: with 3 replicas on only 3 hosts, losing a node leaves every PG degraded with no spare host to recover onto.

On Wed, Jun 20, 2018 at 12:58 PM Nick A <nick.bmth@xxxxxxxxx> wrote:
>
> Hello Everyone,
>
> We're planning a small cluster on a budget, and I'd like to request any feedback or tips.
>
> 3x Dell R720XD with:
> 2x Xeon E5-2680v2 or very similar
Those CPUs look good and should be fast enough for the IOPS you're after.

> 96GB RAM
At the full build-out of 24 OSDs per node, 96GB works out to 4GB per OSD, which looks a bit on the short side. 192GB would probably help.
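A minimal sketch of the knob involved, assuming a release that has osd_memory_target (it was backported to later Luminous and Mimic point releases; older builds use bluestore_cache_size instead). In ceph.conf:

    [osd]
    # per-OSD memory budget in bytes (4 GiB here); BlueStore tries
    # to keep its caches within this. With 24 OSDs per node, 4 GiB
    # each already consumes ~96 GB before the OS and MONs get any.
    osd memory target = 4294967296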

> 2x Samsung SM863 240GB boot/OS drives
> 4x Samsung SM863 960GB OSD drives
> Dual 40/56Gbit Infiniband using IPoIB.
>
> 3 replica, MON on OSD nodes, RBD only (no object or CephFS).
>
> We'll probably add another 2 OSD drives per month per node until full (24 SSDs per node), at which point, more nodes. We've got a few SM863s in production on other systems and are seriously impressed with them, so we'd like to use them for Ceph too.
>
> We're hoping this is going to provide a decent amount of IOPS; 20k would be ideal. I'd like to avoid NVMe journals unless they're going to make a truly massive difference. Same with carving up the SSDs: I'd rather not, and just keep it as simple as possible.
I agree: those SSDs shouldn't really require a separate journal device. I'm
not sure about the 20k IOPS, though, especially without any further information.
Doing 20k IOPS at a 1kB block size is totally different from doing it at 1MB...
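If you want to pin that down before committing, a minimal fio sketch (assumes fio was built with the rbd engine; the pool and image names, rbdbench and bench-img, are placeholders):

    # random 4k writes straight to an RBD image; rerun with
    # --bs=1M to see the same cluster become throughput-bound
    fio --name=rbd-iops --ioengine=rbd --clientname=admin \
        --pool=rbdbench --rbdname=bench-img \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
        --direct=1 --runtime=60 --time_based --group_reporting

Comparing the 4k and 1M runs is usually the quickest way to turn "20k IOPS" into a concrete requirement.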
>
> Is there anything that obviously stands out as severely unbalanced? The R720XD comes with a H710 - instead of putting them in RAID0, I'm thinking a different HBA might be a better idea, any recommendations please?
I don't know that HBA. Does it support pass-through (HBA) mode? For Ceph you
want the disks exposed directly, not as single-drive RAID0 virtual disks.
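A quick way to check from the OS (the device name below is just an example): in true pass-through mode each SSD appears as its own SCSI device and answers SMART queries directly, while RAID0 volumes show up as PERC virtual disks:

    # expect the Samsung model strings, not "PERC H710" entries
    lsscsi
    # pass-through drives respond to SMART directly; virtual
    # disks generally need the megaraid passthrough syntax
    smartctl -i /dev/sda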
>
> Regards,
> Nick
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


