Does anyone have a good recommendation for per-OSD memory for EC? My EC test blew up in my face when my OSDs suddenly spiked to 10+ GB per OSD process as soon as any reconstruction was needed. Which (of course) caused OSDs to OOM, which meant more reconstruction, which fairly immediately led to a dead cluster. This was with Giant. Is this typical?
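(Not an authoritative answer, but a rough sizing sketch may help frame the question. All figures below are hypothetical assumptions, not measured values: a steady-state heap per OSD, the ~10 GB reconstruction spike described above, and an assumed fraction of OSDs recovering at once.)

```python
# Back-of-the-envelope RAM budget for one OSD node during EC recovery.
# Every number here is an illustrative assumption, not a recommendation.
osds_per_node = 6
steady_gb_per_osd = 2.0   # assumed steady-state heap per OSD
spike_gb_per_osd = 10.0   # per-OSD peak during reconstruction (as reported above)
spiking_fraction = 0.5    # assume half the OSDs on the node recover at once

# Baseline for all OSDs, plus the extra headroom for the spiking subset.
budget_gb = (osds_per_node * steady_gb_per_osd
             + osds_per_node * spiking_fraction
               * (spike_gb_per_osd - steady_gb_per_osd))
print(budget_gb)  # 36.0 GB for this hypothetical node
```

In practice one might also reduce how many OSDs spike simultaneously by throttling recovery concurrency (e.g. the `osd_max_backfills` and `osd_recovery_max_active` options), at the cost of slower recovery.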
On Fri Feb 06 2015 at 2:41:50 AM Mohamed Pakkeer <mdfakkeer@xxxxxxxxx> wrote:
Hi all,

We are building an EC cluster with a cache tier for CephFS. We are planning to use the following 1U chassis along with Intel SSD DC S3700 drives for the cache tier. It has 10 * 2.5" slots. Could you recommend a suitable Intel processor and amount of RAM to cater for 10 SSDs?

On Fri, Feb 6, 2015 at 2:57 PM, Stephan Seitz <s.seitz@xxxxxxxxxxxxxxxxxxx> wrote:

Hi,
Am Dienstag, den 03.02.2015, 15:16 +0000 schrieb Colombo Marco:
> Hi all,
> I have to build a new Ceph storage cluster. After I've read the
> hardware recommendations and some mails from this mailing list, I would
> like to buy these servers:
just FYI:
SuperMicro already focuses on Ceph with a product line:
http://www.supermicro.com/solutions/datasheet_Ceph.pdf
http://www.supermicro.com/solutions/storage_ceph.cfm
regards,
Stephan Seitz
--
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-44
Fax: 030 / 405051-19
Mandatory disclosures per §35a GmbHG: HRB 93818 B / Amtsgericht
Berlin-Charlottenburg,
Managing Director: Peer Heinlein -- Registered office: Berlin
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
Thanks & Regards
K.Mohamed Pakkeer
Mobile: 0091-8754410114