Bulk storage use case

Another thought: I would hope that with EC, the spread of data chunks would benefit from the write capability of each drive they are stored on.

I have not received any reply so far! Does this kind of configuration (hardware & software) look crazy?! Am I missing something?

Looking forward to your comments, thanks in advance.

--
Cédric Lemarchand

> On 7 May 2014 at 22:10, Cedric Lemarchand <cedric at yipikai.org> wrote:
> 
> Some more details: the IO pattern will be around 90% write / 10% read, mainly sequential.
> Recent posts show that the max_backfills, recovery_max_active and recovery_op_priority settings will be helpful in case of backfilling/rebalancing.
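
(For reference, those throttles live in ceph.conf under the [osd] section; the values below are only illustrative starting points, not tested recommendations:

    [osd]
    osd max backfills = 1
    osd recovery max active = 1
    osd recovery op priority = 1

Lower values make backfill/recovery gentler on client IO, at the cost of a longer recovery window.)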
> 
> Any thoughts on such a hardware setup?
> 
> On 07/05/2014 at 11:43, Cedric Lemarchand wrote:
>> Hello,
>> 
>> This build is only intended for archiving purposes; what matters here is lowering the $/TB/W ratio.
>> Access to the storage would be via radosgw, installed on each node. I need each node to sustain an average 1 Gb/s write rate, which I think should not be a problem. Erasure coding will be used with something like k=12 m=3.
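
(As a purely illustrative sketch of what such a profile could look like on Firefly; the profile and pool names and the PG count are made up, and with k+m=15 chunks the failure domain has to fit the number of nodes, so it may need to be "osd" rather than "host":

    ceph osd erasure-code-profile set archive12_3 k=12 m=3 ruleset-failure-domain=osd
    ceph osd pool create rgw_archive_data 4096 4096 erasure archive12_3

The radosgw bucket data pool would then need to be pointed at an EC pool along those lines.)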
>> 
>> A typical node would be:
>> 
>> - Supermicro 36-bay chassis
>> - 2x Xeon E5-2630Lv2
>> - 96 GB RAM (the recommended 1 GB/TB per OSD ratio is lowered a bit ...)
>> - LSI HBA adapters in JBOD mode, could be 2x 9207-8i
>> - 36x 4 TB HDDs with the default journal config
>> - dedicated bonded 2 Gb links for the public/private networks (backfilling will take ages if a full node is lost ...)
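
(A rough back-of-envelope on those numbers, assuming 1 Gb/s of client ingest is about 125 MB/s per node:

    125 MB/s   client writes
    x 1.25     EC 12+3 overhead       ~ 156 MB/s of chunk data
    x 2        co-located journals    ~ 312 MB/s of raw disk writes
    / 36       spindles               ~ 9 MB/s per drive on average

so the disks themselves should have plenty of headroom; very roughly, though, the chunk fan-out between OSDs also puts somewhere over 1 Gb/s on the 2 Gb private bond, leaving limited room for recovery traffic, so the network looks like the tighter budget.)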
>> 
>> 
>> I think that in an *optimal* state (Ceph healthy) it could handle the job. Waiting for your comments.
>> 
>> What bothers me more are OSD maintenance operations like backfilling and cluster rebalancing, where nodes will be put under very high IO, memory and CPU load for hours or days. Will latency *just* grow, or will everything fly away? (OOM killer spawning, OSDs committing suicide because of latency, nodes pushed out of the cluster, etc. ...)
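
(For the planned-maintenance part of that, the one knob I am aware of is the noout flag, shown here only as an illustration:

    ceph osd set noout     # stop OSDs being marked "out" (and rebalancing) during planned work
    ...do the maintenance...
    ceph osd unset noout

and "mon osd down out interval" in ceph.conf controls how long an OSD may stay down before it is marked out and backfilling starts; an unplanned loss of a whole node would of course still trigger the full rebalance.)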
>> 
>> As you can see, I am trying to design the cluster with a sweet spot in mind along the lines of "things become slow, latency grows, but the nodes stay stable/usable and aren't pushed out of the cluster".
>> 
>> This is my first jump into Ceph, so any input will be greatly appreciated ;-)
>> 
>> Cheers,
>> 
>> --
>> Cédric
>> 
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users at lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> -- 
> Cédric
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com