Hi Christian,

Christian Balzer wrote:

>> Sorry in advance for this thread not being directly linked to Ceph. ;)
>> We are thinking about buying servers to build a Ceph cluster and we
>> would like to have, if possible, an *approximate* power usage
>> estimation of these servers (this parameter could be important in
>> our choice):
>
> In short, way, way, way too many variables.
> Which CPUs, HDDs/SSDs, PSUs.
> And a lightly loaded cluster/node will consume something like 1/3rd of
> the power, CPU-wise, that a very busy one does.

Yes indeed. It's just to get a very approximate idea.

>> 1. the 12-bay Supermicro OSD node
>> (here https://www.supermicro.com/solutions/datasheet_Ceph.pdf,
>> page 2, model SSG-6027R-OSD040H in the table)
>
> I'd really wish SM would revise that pamphlet; for nearly all the roles
> in there they have better suited models.
> And models that fill requirements not really covered in that sheet.

Ah, err... could you be more precise? Which models do you have in mind?
Do you have links?

> If you're willing to take the 1:5 SSD journal to OSD ratio risk, as
> proposed by that configuration, why not go all out to a chassis that has
> 2 hotswap bays in the back and 1:6. Much better density and you'll have
> journals and HDDs on different SATA buses.

I'm not sure I fully understand: the model I indicated in the link above
(page 2, model SSG-6027R-OSD040H in the table) already has hotswap bays
in the back, for the OS drives.

>> 2. an SC216-based chassis, 2U, 24 x 2.5" bays (like this one for
>> instance:
>> http://www.supermicro.com/products/chassis/2U/216/SC216BA-R1K28LP.cfm)
>
> At this level of density, you'd need about 24GHz of combined CPU power
> to fully utilize the IOPS potential of a pure HDD-based node.

OK, so can I take it that the general rule is ~1GHz per OSD for HDDs
(with no separate journal)?

> The moment you add SSD journals to this picture, that number at _least_
> doubles, making it a potentially very power hungry unit.
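For what it's worth, the rule of thumb discussed here (~1GHz of CPU per HDD-only OSD, and at least double that once SSD journals are added) can be sketched as a quick back-of-envelope calculation. These are the thread's rough estimates, not benchmarked figures:

```python
# Back-of-envelope CPU sizing for a Ceph OSD node, using the rough
# figures from this thread (assumptions, not measured values):
#   ~1 GHz of CPU per HDD-backed OSD without separate journals,
#   at least double that once SSD journals let the HDDs run flat out.
GHZ_PER_HDD_OSD = 1.0        # thread's rule of thumb
JOURNAL_MULTIPLIER = 2.0     # "at least doubles" with SSD journals

def node_cpu_ghz(osd_count, ssd_journals=False):
    """Estimate the combined CPU (GHz) a node needs to saturate its OSDs."""
    ghz = osd_count * GHZ_PER_HDD_OSD
    if ssd_journals:
        ghz *= JOURNAL_MULTIPLIER
    return ghz

# 24-bay 2.5" chassis, pure HDD (matches the ~24GHz figure quoted above):
print(node_cpu_ghz(24))                      # 24.0
# Same chassis with SSD journals:
print(node_cpu_ghz(24, ssd_journals=True))   # 48.0
```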
So, if I understand correctly, I should estimate ~2GHz per OSD with the
journal on a separate SSD. Is that correct?

> You'll also need a HBA/RAID card to connect up those 6 mini-SAS ports
> on the backplane.

Is an HBA/RAID card always necessary? Are there hardware configurations
where the disks could be connected directly to the motherboard's
controller?

> If you're concerned about power, look at their X10 offerings with
> Titanium level PSUs and pick CPUs that are energy efficient while still
> having enough capacity to satisfy your IOPS needs.

OK.

>> If someone here has a server like the above, we would be curious to
>> have an approximate power usage estimation (for instance in
>> volt-amperes).
>
> A SM server (not running Ceph, but as a mailbox server being somewhat
> comparable) here with Platinum (94% efficiency supposedly) PSUs
> consumes, while basically idle, 105W on the input side (100V in Japan)
> and 95W on the output side.
> This basically triples during peak utilization times.

OK, thank you for your help Christian. :)

PS: oddly, your message doesn't appear in the archive:
http://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg18699.html

--
François Lafont

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
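As a closing note, Christian's sample figures quoted above (~105W at the wall when idle, roughly tripling at peak) can be turned into the rough volt-ampere estimate the original question asked for. The power factor here is an assumption (typical for modern active-PFC server PSUs), not a figure from the thread:

```python
# Rough power envelope based on the sample numbers in this thread
# (a comparable SM box with Platinum ~94%-efficient PSUs):
# ~105 W input-side when idle, roughly 3x that at peak.
IDLE_WATTS = 105.0       # measured input-side figure from the thread
PEAK_FACTOR = 3.0        # "basically triples during peak utilization"
POWER_FACTOR = 0.95      # assumed; typical for active-PFC server PSUs

def peak_watts(idle_w=IDLE_WATTS, factor=PEAK_FACTOR):
    """Estimate peak real power (W) from the idle wall draw."""
    return idle_w * factor

def volt_amperes(watts, pf=POWER_FACTOR):
    """Convert real power (W) to apparent power (VA), e.g. for UPS sizing."""
    return watts / pf

print(round(peak_watts()))                  # 315  (W at peak)
print(round(volt_amperes(peak_watts())))    # 332  (VA at peak)
```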