Re: [a bit off-topic] Power usage estimation of hardware for Ceph


On Fri, 10 Apr 2015 09:00:40 +0200 Francois Lafont wrote:

> Hi Christian,
> 
> Christian Balzer wrote:
> 
> >> Sorry in advance for this thread not being directly related to Ceph. ;)
> >> We are thinking about buying servers to build a Ceph cluster and we
> >> would like to have, if possible, an *approximate* power usage
> >> estimate for these servers (this parameter could be important in
> >> our choice):
> >>
> > In short, way, way, way too many variables.
> > Which CPUs, HDDs/SSDs, PSUs. 
> > And a lightly loaded cluster/node will consume roughly 1/3rd of the
> > CPU power that a very busy one does.
> 
> Yes indeed. It's just to get a very approximate idea.
> 
> >> 1. the 12xbays supermicro OSD node
> >>    (here https://www.supermicro.com/solutions/datasheet_Ceph.pdf,
> >>    page 2, model SSG-6027R-OSD040H in the table)
> >>
> > I really wish SM would revise that pamphlet; for nearly all the
> > roles in there they have better suited models, and models that fill
> > requirements not really covered in that sheet.
> 
> Ah, err... could you be more precise? Which models do you have in mind?
> Do you have links?
> 
Depends on your use case really: density, cost, HDDs or SSDs for OSDs,
etc.

See below for an example.

> > If you're willing to take the 1:5 SSD journal to OSD ratio risk, as
> > proposed by that configuration, why not go all out with a chassis that
> > has 2 hotswap bays in the back and a 1:6 ratio? Much better density,
> > and you'll have journals and HDDs on different SATA buses.
> 
> I'm not sure I understand correctly: the model I indicated in the link
> above (page 2, model SSG-6027R-OSD040H in the table) already has hotswap
> bays in the back, for the OS drives.
> 
Yes, but that model is pre-configured:
 2x 2.5" 400GB SSDs, 10x 3.5" 4TB SATA3 HDDs
 Rear 2.5" Hot-swap OS drives (mirrored 80GB SSD)

What model SSDs and HDDs are those anyway?

Instead you could use basically the same thing:
http://www.supermicro.com.tw/products/system/2U/6028/SSG-6028R-E1CR12L.cfm

And put 12 HDDs (of your choice) in the front and 2 fast and durable SSDs
for journals (and OS) in the back.

> >> 2. SC216-based chassis 2U, 24xbays 2.5" (like this one for instance
> >>    http://www.supermicro.com/products/chassis/2U/216/SC216BA-R1K28LP.cfm)
> >>
> > 
> > At this level of density, you'd need about 24GHz of combined CPU power
> > to fully utilize the IOPS potential of a pure HDD-based node.
> 
> Ok, can I consider that the general rule is ~1GHz per HDD-based OSD (no
> separate journal)?
> 
Yes, as per the numerous hardware configuration guides.
No separate SSD journal, journal on the same HDD.


> > The moment you add SSD journals to this picture, that number at _least_
> > doubles, making it a potentially very power hungry unit.
> 
> So, if I understand correctly, I should estimate ~2GHz per OSD with the
> journal on a separate SSD. Is that correct?
> 
At least.
With a fio like this inside a VM:
---
fio --size=4G --ioengine=libaio --invalidate=1 --direct=1 --numjobs=1 --rw=randwrite --name=fiojob --blocksize=4K --iodepth=32
---
I can make all 8 3.1GHz cores on an 8-OSD server with 4 journal SSDs reach
100% utilization.

The CPU-to-IOPS ratio is likely to improve as Ceph improves, but this is
with Firefly, and I doubt even the just-released Hammer would change that
very much.
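
If you want to turn those rules of thumb into a quick per-node estimate,
something like the following back-of-the-envelope sketch works (Python,
purely illustrative; the GHz-per-OSD constants are just the rough figures
above, not measured values):
---
# Rough CPU sizing per OSD node, using the rule-of-thumb figures above.
GHZ_PER_OSD_HDD_ONLY = 1.0     # journal co-located on the HDD
GHZ_PER_OSD_SSD_JOURNAL = 2.0  # separate SSD journal: at least double

def required_cpu_ghz(num_osds, ssd_journals=False):
    # Combined CPU clock (GHz) needed to saturate the OSDs' IOPS potential.
    per_osd = GHZ_PER_OSD_SSD_JOURNAL if ssd_journals else GHZ_PER_OSD_HDD_ONLY
    return num_osds * per_osd

print(required_cpu_ghz(24))                     # 24-bay HDD-only node: ~24 GHz
print(required_cpu_ghz(12, ssd_journals=True))  # 12 OSDs + SSD journals: ~24 GHz
---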

> > You'll also need an HBA/RAID card to connect up those 6 mini-SAS ports
> > on the backplane.
> 
> Is an HBA/RAID card always necessary?
> 
With the model you cited, yes. All those ports need to be connected up (4
SATA links per port).

> Are there some hardware configurations where it would be possible to
> connect the disks directly to the motherboard's controller?
>
Depends on the motherboard, see the concurrent "Motherboard" thread on
this ML.

> > If you're concerned about power, look at their X10 offerings with
> > Titanium level PSUs and pick CPUs that are energy efficient while
> > still having enough capacity to satisfy your IOPS needs.
> 
> Ok.
> 
> >> If someone here has a server like the above, we would be curious to
> >> have an approximate power usage estimate (for instance in volt-amperes).
> >>
> > An SM server here (not running Ceph, but as a mailbox server somewhat
> > comparable) with Platinum (supposedly 94% efficiency) PSUs consumes,
> > while basically idle, 105W on the input side (100V in Japan) and 95W
> > on the output side.
> > This basically triples during peak utilization times.
> 
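To put the volt-ampere part of your question in perspective: apparent power
(VA) is just wall-side watts divided by the power factor, and wall-side
watts are DC output watts divided by PSU efficiency. A rough sketch
(Python, purely illustrative; the 0.90 efficiency and 0.95 power factor
are assumptions, not measurements):
---
# Rough W/VA conversion; the efficiency and power factor values are assumptions.
def input_watts(output_watts, psu_efficiency=0.90):
    # Wall-side (AC input) watts from DC output watts and PSU efficiency.
    return output_watts / psu_efficiency

def apparent_power_va(watts_at_wall, power_factor=0.95):
    # Apparent power in VA; active-PFC PSUs typically sit around 0.9-0.99.
    return watts_at_wall / power_factor

# Idle numbers from above: ~95W output, ~105W measured on the input side.
print(round(input_watts(95)))           # ~106 W at the wall
print(round(apparent_power_va(105)))    # ~111 VA, assuming PF 0.95
---
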
> Ok, thank you for your help Christian. :)
> 
> PS: it's curious your message doesn't appear in the archive:
> http://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg18699.html
> 
Given how unreliable/delayed the Ceph ML is in general, and that the
ceph/inktank sites are currently unreachable from here to boot, I'm not
particularly surprised (or worried).

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



