Re: Dell Ceph Hardware recommendations

Alex Leake <A.M.D.Leake@...> writes:

> 
> Hello Michael,
> 
> I maintain a small Ceph cluster at the University of Bath; our cluster
> consists of:
> 
> Monitors:
> 3 x Dell PowerEdge R630
> 
>  - 2x Intel(R) Xeon(R) CPU E5-2609 v3
>  - 64GB RAM
>  - 4x 300GB SAS (RAID 10)
> 
> OSD Nodes:
> 6 x Dell PowerEdge R730XD & MD1400 Shelves
> 
>  - 2x Intel(R) Xeon(R) CPU E5-2650
>  - 128GB RAM
>  - 2x 600GB SAS (OS - RAID1)
>  - 2x 200GB SSD (PERC H730)
>  - 14x 6TB NL-SAS (PERC H730)
>  - 12x 4TB NL-SAS (PERC H830 - MD1400)
> 
> Please let me know if you want any more info.
> 
> In my experience thus far, I've found this ratio is not useful for cache
> tiering etc. - the SSDs are in a separate pool.
> 
> If I could start over, I'd go for fewer OSDs per host - and no SSDs (or a
> much better ratio, like 4:1).
> 
> Kind Regards,
> Alex.

I'm really glad you noted this. I was following the Red Hat/Supermicro
reference architecture
(https://www.redhat.com/en/files/resources/en-rhst-cephstorage-supermicro-INC0270868_v2_0715.pdf),
which on page 11 suggests 12 disks per Intel 7xx-series SSD, so I was
debating whether that ratio would be suitable. I try to have only 4
spinning disks per SSD cache.
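
As a quick sanity check on what those two ratios mean for a node like
yours (14 + 12 = 26 spinners vs. 2 SSDs, per the specs above), a minimal
sketch - the node layout comes from your post, the rest is plain
arithmetic:

    # Rough spinner-to-SSD ratio check for one OSD node.
    # Node layout from Alex's post: 14x 6TB + 12x 4TB NL-SAS, 2x 200GB SSD.
    import math

    spinners = 14 + 12          # NL-SAS OSD disks per node
    ssds = 2                    # SSDs per node

    current_ratio = spinners / ssds
    print(f"current ratio: {current_ratio:.0f}:1")   # ~13:1

    target_ratio = 4            # the suggested spinner:SSD ratio
    ssds_needed = math.ceil(spinners / target_ratio)
    print(f"SSDs needed at {target_ratio}:1 -> {ssds_needed}")  # 7 per node

So at 4:1 a node like that would need 7 SSDs instead of 2.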

If I get 4TB NL-SAS drives, how big would the SSD need to be?
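
For the journal side of it, here's my back-of-envelope - a minimal sketch
using the FileStore rule of thumb from the Ceph docs (osd journal size =
2 * expected throughput * filestore max sync interval); the ~150 MB/s
drive throughput and the 4:1 ratio are my assumptions, not measured
numbers:

    # Back-of-envelope FileStore journal sizing, per the upstream rule of thumb:
    #   osd journal size = 2 * (expected throughput * filestore max sync interval)
    # Assumed numbers (not measured): ~150 MB/s per 4TB NL-SAS drive,
    # the default 5 s filestore max sync interval, 4 spinners per SSD.

    throughput_mb_s = 150        # assumed sustained write rate of one NL-SAS drive
    sync_interval_s = 5          # filestore max sync interval (Ceph default)
    spinners_per_ssd = 4         # the 4:1 ratio discussed above

    journal_mb = 2 * throughput_mb_s * sync_interval_s
    print(f"journal per OSD: {journal_mb} MB")                            # 1500 MB
    print(f"SSD space for journals: {spinners_per_ssd * journal_mb} MB")  # 6000 MB

So journals alone barely dent even a small SSD; if the SSD is meant as a
cache tier instead, sizing is driven by the hot working set rather than a
formula.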
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



