That PDF specifically calls for P3700 NVMe SSDs, not the consumer 750 - you generally need high-endurance drives.
I'm using 1x 400GB Intel P3700 per 9 OSDs (so 4x P3700 per 36-disk chassis).
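To put rough numbers on that ratio, here is a minimal Python sketch using the journal-sizing guideline from the Ceph docs (journal size = 2 * expected throughput * filestore max sync interval). The throughput and sync-interval values are assumptions for illustration, not measurements from my cluster:

# Rough journal-sizing sketch. Guideline from the Ceph docs:
#   osd journal size = 2 * (expected throughput * filestore max sync interval)
# The numbers below are assumptions for illustration (~150 MB/s per NL-SAS
# spindle, 5 s default sync interval), not figures from either cluster here.

NVME_CAPACITY_GB = 400        # one Intel P3700
OSDS_PER_NVME = 9             # the ratio described above

disk_throughput_mb_s = 150            # assumed sequential speed of one spinner
filestore_max_sync_interval_s = 5     # Ceph default

journal_size_mb = 2 * disk_throughput_mb_s * filestore_max_sync_interval_s
per_osd_partition_gb = NVME_CAPACITY_GB / OSDS_PER_NVME

print(f"guideline journal size per OSD: {journal_size_mb} MB")           # 1500 MB
print(f"available NVMe space per OSD:   {per_osd_partition_gb:.0f} GB")  # ~44 GB

Even with generous throughput assumptions the guideline lands well under the ~44GB of P3700 each OSD gets, so capacity isn't the constraint at this ratio - endurance and write latency are.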
Alex Leake <A.M.D.Leake@...> writes:

Hello Michael,

I maintain a small Ceph cluster at the University of Bath; our cluster consists of:

Monitors: 3 x Dell PowerEdge R630
- 2x Intel(R) Xeon(R) CPU E5-2609 v3
- 64GB RAM
- 4x 300GB SAS (RAID 10)

OSD Nodes: 6 x Dell PowerEdge R730XD & MD1400 shelves
- 2x Intel(R) Xeon(R) CPU E5-2650
- 128GB RAM
- 2x 600GB SAS (OS - RAID 1)
- 2x 200GB SSD (PERC H730)
- 14x 6TB NL-SAS (PERC H730)
- 12x 4TB NL-SAS (PERC H830 - MD1400)
Please let me know if you want any more info.
In my experience thus far, I've found this ratio is not useful for cache tiering etc. - the SSDs are in a separate pool.
If I could start over, I'd go for fewer OSDs per host - and no SSDs (or a much better ratio, like 4:1).

Kind Regards,
Alex.
I'm really glad you noted this - I was just following the Red Hat/SuperMicro deployment reference architecture (https://www.redhat.com/en/files/resources/en-rhst-cephstorage-supermicro-INC0270868_v2_0715.pdf); page 11 notes 12 disks per 7xx Intel SSD, so I was debating whether it might have been suitable. I try to have only 4 spinning disks per SSD cache. If I get 4TB NL-SAS drives, how big would the SSD need to be?
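For a back-of-the-envelope answer, here is a small Python sketch. The 10% hot-data fraction is purely an assumed rule of thumb for illustration - the right value depends entirely on the workload's actual hot set:

# Back-of-the-envelope cache-tier sizing sketch.
# assumed_hot_fraction is an assumption, not a recommendation.

SPINNERS_PER_SSD = 4          # the 4:1 ratio mentioned above
SPINNER_SIZE_TB = 4           # 4TB NL-SAS drives
assumed_hot_fraction = 0.10   # assumption: ~10% of backing capacity is "hot"

backing_capacity_tb = SPINNERS_PER_SSD * SPINNER_SIZE_TB          # 16 TB behind each SSD
suggested_ssd_tb = backing_capacity_tb * assumed_hot_fraction

print(f"backing capacity per cache SSD: {backing_capacity_tb} TB")
print(f"suggested cache SSD size:       {suggested_ssd_tb:.1f} TB")   # ~1.6 TB under this assumption

Under that assumption each cache SSD would want to be in the region of 1.6TB, which lines up with Alex's point that small 200GB-class drives aren't much use for cache tiering.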