>> Interconnect as currently planned:
>> 4 x 1Gbit LACP bonds over a pair of MLAG-capable switches (planned:
>> EX3300)

> If you can do 10G networking it's really worth it. I found that with
> 1G, latency affects your performance before you max out the bandwidth.
> We got some Supermicro servers with 10GBASE-T onboard for a tiny price
> difference, and some basic 10GBASE-T switches.

I do not expect to max out the bandwidth. My estimate is that 200 MB/s
read/write will be needed at most. The performance metric that suffers
most, from what I have read, would be IOPS? How many IOPS do you think
will be possible with 8 nodes of 4 OSDs each on 4x1Gbit, distributed
among all the clients, VMs, etc.? (My own back-of-envelope sketch is at
the end of this message.)

>> 250GB SSD - Journal (MX200 250GB with extreme over-provisioning,
>> staggered deployment, monitored for TBW value)

> Not sure if that SSD would be suitable for a journal. I would
> recommend going with one of the Intel S3700s. You could also save a
> bit and run the OS from it.

I am still on the fence about ditching the SATA-DOM and installing the
OS on the SSD as well. If the MX200s turn out to be unsuited, I can
still use them for other purposes and fetch some better SSDs later.
(A sketch of the TBW monitoring I have in mind is also at the end of
this message.)

>> Seagate Surveillance HDD (ST3000VX000) 7200rpm

> Would also possibly consider a more NAS/enterprise-friendly HDD.

I thought video-surveillance HDDs would be a nice fit: they are built
to run 24/7 and to write multiple data streams to disk at the same
time. They are also cheap, which lets me get more nodes from the start.

> CPU might be on the limit, but would probably suffice. If anything
> you won't max out all the cores, but the overall speed of the CPU
> might increase latency, which may or may not be a problem for you.

Do you have some values, so that I can picture the difference? I also
maintain another cluster with dual-socket hexa-core Xeon 12-OSD nodes,
and all the CPUs do is idle. The 2x10G LACP link is usually never used
above 1 Gbit. Hence the focus on cost efficiency with this build.

>> Are there any cost-effective suggestions to improve this
>> configuration?

> Have you looked at a normal Xeon-based server, but with more disks
> per node? Depending on how much capacity you need, spending a little
> more per server while allowing more disks per server might work out
> cheaper.
> There are some interesting SuperMicro combinations, or if you want to
> go really cheap, you could buy case, MB, CPU, etc. separately and
> build yourself.
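For reference, here is the back-of-envelope math behind my 200 MB/s and
IOPS figures. All per-device numbers are rough assumptions (a pool size
of 3, ~100 random IOPS per 7200rpm spindle), not measurements:

    # Rough capacity estimate for the planned cluster.
    GBIT = 1000**3 / 8          # 1 Gbit/s in bytes/s
    LINKS_PER_NODE = 4          # 4 x 1Gbit LACP; LACP hashes per flow,
                                # so one stream still tops out at about
                                # one link's worth of bandwidth
    NODES = 8
    OSDS_PER_NODE = 4
    REPLICATION = 3             # assumed pool size
    HDD_IOPS = 100              # assumed 7200rpm spindle, random 4k

    # Network ceiling per node, given many parallel flows.
    node_bw_mb = LINKS_PER_NODE * GBIT / 1e6       # ~500 MB/s

    # Each client write lands on REPLICATION spindles; the SSD journal
    # absorbs the journal double-write, so the HDDs see one write per
    # replica. Reads are served from the primary OSD only.
    raw_iops = NODES * OSDS_PER_NODE * HDD_IOPS    # 3200
    write_iops = raw_iops / REPLICATION            # ~1067
    read_iops = raw_iops

    print(f"per-node network ceiling: ~{node_bw_mb:.0f} MB/s")
    print(f"cluster random write IOPS: ~{write_iops:.0f}")
    print(f"cluster random read IOPS:  ~{read_iops:.0f}")

So the 4x1Gbit bonds leave plenty of headroom for 200 MB/s of
multi-client traffic, and the spindle count, not the network, should be
the IOPS ceiling.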
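As for the TBW monitoring: a minimal sketch of what I plan to cron on
the journal SSDs, assuming smartctl is installed. The host-write
attribute ID and its unit vary by vendor (246 on my Crucial drives,
often 241/Total_LBAs_Written elsewhere, usually counted in 512-byte
sectors), so check your own smartctl -A output and adjust. The 80 TBW
figure is the rated endurance of the 250GB MX200.

    import subprocess

    DEVICE = "/dev/sdb"            # hypothetical journal SSD
    WRITE_ATTR_IDS = {241, 246}    # candidate host-write attribute IDs
    TBW_RATING = 80                # rated TBW of the MX200 250GB

    out = subprocess.check_output(["smartctl", "-A", DEVICE], text=True)

    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit() \
                and int(fields[0]) in WRITE_ATTR_IDS:
            written_tb = int(fields[-1]) * 512 / 1000**4  # RAW_VALUE
            print(f"{fields[1]}: {written_tb:.2f} TB written"
                  f" (~{100 * written_tb / TBW_RATING:.1f}% of rated TBW)")

With the staggered deployment, watching this percentage drift apart
between drives should give enough warning before any of them wears out.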