Re: [a bit off-topic] Power usage estimation of hardware for Ceph

Hi,

Christian Balzer wrote:

>> I'm not sure I understand correctly: the model that I indicated in the link
>> above (page 2, model SSG-6027R-OSD040H in the table) already has hot-swap
>> bays in the back for the OS drives.
>>
> Yes, but that model is pre-configured:
>  2x 2.5" 400GB SSDs, 10x 3.5" 4TB SATA3 HDDs
>  Rear 2.5" Hot-swap OS drives (mirrored 80GB SSD)
> 
> What model SSDs and HDDs are those anyway?
> 
> Instead you could use the basically same thing:
> http://www.supermicro.com.tw/products/system/2U/6028/SSG-6028R-E1CR12L.cfm
> 
> And put 12 HDDs (of your choice) in the front and 2 fast and durable SSDs
> for journals (and OS) in the back.

Ok, thx for the link.

Sorry for yet another question, but: are there people who use software RAID 1
on the dedicated journal SSDs? For instance, put the journals of OSDs 1, 2 and 3
on SSD1 and create a software RAID 1 between SSD1 and SSD2 so that, if SSD1
crashes, OSDs 1, 2 and 3 stay alive. It seems to me that few people use software
RAID 1 between dedicated journal SSDs; am I wrong? It could be a good way to
minimize the risk of losing a whole set of OSDs when an SSD crashes. Of course,
I imagine I would have to decrease the journals-per-SSD ratio in this case.
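
Just to make the idea concrete, here is roughly what I have in mind (a minimal
sketch only; the device names /dev/sdm and /dev/sdn and the journal sizes are
purely hypothetical):
---
# Mirror the two journal SSDs with Linux software RAID (mdadm).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdm /dev/sdn

# Carve one journal partition per OSD out of the mirror,
# e.g. three 10 GB journals for OSDs 1, 2 and 3.
parted -s /dev/md0 mklabel gpt
parted -s /dev/md0 mkpart journal-osd1 1MiB 10GiB
parted -s /dev/md0 mkpart journal-osd2 10GiB 20GiB
parted -s /dev/md0 mkpart journal-osd3 20GiB 30GiB
--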

>>> At this level of density, you'd need about 24 GHz of combined CPU power to
>>> fully utilize the IOPS potential of a pure HDD based node.
>>
>> Ok, can I consider that the general rule is ~1 GHz per HDD-based OSD (no
>> separate journal)?
>
> Yes, as per the numerous hardware configuration guides.
> No separate SSD journal, journal on the same HDD.

Ok.
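
Just to check my own back-of-the-envelope maths for the 12-bay chassis you
linked above (these figures are mine, not taken from any guide):

  12 OSDs x ~1 GHz ~= 12 GHz combined  (journal on the same HDD)
  12 OSDs x ~2 GHz ~= 24 GHz combined  (journal on a separate SSD)
  e.g. 2 x 6-core 2.0 GHz CPUs ~= 24 GHz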

>>> The moment you add SSD journals to this picture, that number at _least_
>>> doubles, making it a potentially very power hungry unit.
>>
>> So, if I understand correctly, I should estimate ~2 GHz per OSD with the
>> journal on a separate SSD. Is that correct?
>
> At least.
> With a fio like this inside a VM:
> ---
> fio --size=4G --ioengine=libaio --invalidate=1 --direct=1 --numjobs=1 --rw=randwrite --name=fiojob --blocksize=4K --iodepth=32
> --
> I can make all 8 3.1GHz core on an 8 OSD server with 4 journal SSDs reach
> 100% utilization.
> 
> CPU to IOPS ratio is likely to improve as Ceph improves, but this is with
> Firefly and I doubt even the just released Hammer would change that very
> much.
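
Good to know. When I test, I will probably run the same fio job from a VM and
watch per-core CPU utilization on the OSD node at the same time, with something
like this (assuming the sysstat package is installed; purely a sketch):
---
# per-core CPU utilization, refreshed every 2 seconds, during the fio run
mpstat -P ALL 2
# and extended device statistics, to keep an eye on the journal SSDs
iostat -x 2
--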

[...]

>> Is an HBA/RAID card always necessary?
>>
> With the model you cited, yes. All those ports need to be connected up (4
> SATA links per port).
> 
>> Are there hardware configurations where it would be possible to consider
>> disks directly connected to the motherboard's controller?
>>
> Depends on the motherboard, see the concurrent "Motherboard" thread on
> this ML.
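
Noted, I will have a look at that thread. To check how the disks are actually
attached on a given box, I suppose the standard tools are enough; for example
(nothing Ceph-specific, just a sketch):
---
# list the SATA/SAS/RAID controllers seen by the kernel
lspci | grep -Ei 'sata|sas|raid'
# map each disk to its SCSI host (controller), transport and model
lsblk -S -o NAME,HCTL,TRAN,MODEL
--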

Thanks a lot, Christian, for all this information and feedback.

-- 
François Lafont
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




