Bulk storage use case

Hi Dan,

On 13/05/2014 13:42, Dan van der Ster wrote:
> Hi,
> I think you're not getting many replies simply because those are
> rather large servers and not many have such hardware in prod.
Good point.
> We run with 24x3TB drives, 64GB ram, one 10Gbit NIC. Memory-wise there
> are no problems. Throughput-wise, the bottleneck is somewhere between
> the NIC (~1GB/s) and the HBA / SAS backplane (~1.6GB/s). Since writes
> coming in over the network are multiplied by at least 2 times to the
> disks, in our case the HBA is the bottleneck (so we have a practical
> limit of ~800-900MBps).
Your hardware is pretty close to what I am looking for, thanks for the info.
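For what it's worth, here is the back-of-the-envelope arithmetic as I
understand it, as a rough sketch in Python using only the figures you
quote (I read the 2x factor as the journal + data double write on each
OSD host):

    # Rough sketch of the per-host ingest ceiling described above.
    # Only the figures quoted in the thread are used here.
    nic_bw_gbs = 1.0         # one 10Gbit NIC, ~1 GB/s usable
    hba_bw_gbs = 1.6         # HBA / SAS backplane, ~1.6 GB/s
    write_amplification = 2  # each incoming write hits the disks at least twice

    # The HBA must carry write_amplification bytes of disk traffic for
    # every byte of client data received on the NIC.
    ingest_limit = min(nic_bw_gbs, hba_bw_gbs / write_amplification)
    print("practical ingest limit: ~%.1f GB/s" % ingest_limit)  # ~0.8 GB/s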
> The other factor which makes it hard to judge your plan is how
> the erasure coding will perform, especially given only a 2Gig network
> between servers. I would guess there is very little prod experience
> with the EC code as of today -- and probably zero with boxes similar
> to what you propose. But my gut tells me that with your proposed
> stripe width of 12/3, combined with the slow network, getting good
> performance might be a challenge.
I would love to hear some advice / recommendations about EC from
Inktank's people ;-)
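In the meantime, here is how I picture the cost of a 12/3 stripe over a
2Gbit backend, as a rough sketch only (it assumes the primary OSD
receives the whole object, encodes it, and pushes the remaining k+m-1
chunks to its peers over the cluster network):

    # Back-of-the-envelope cost of a k=12, m=3 erasure-coded write,
    # assuming the primary OSD encodes and distributes the chunks.
    k, m = 12, 3
    backend_mbs = 2000 / 8.0          # 2 Gbit/s inter-server link, ~250 MB/s

    fan_out = (k + m - 1) / float(k)  # chunk bytes sent to peers per client byte (~1.17)
    overhead = (k + m) / float(k)     # raw capacity overhead (1.25x)

    print("peer traffic per client byte: %.2fx" % fan_out)
    print("per-primary ceiling on 2Gbit: ~%d MB/s" % (backend_mbs / fan_out))
    print("raw capacity overhead:        %.2fx" % overhead)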
> I would suggest you start some smaller scale tests to get a feeling
> for the performance before committing to a large purchase of this
> hardware type.
Indeed, without some solid pointers, this is the only way left.
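Something along the lines of the sketch below is what I have in mind; it
just drives the stock "rados bench" tool at a few concurrency levels
(the pool name "ectest" and the thread counts are placeholders):

    # Minimal sweep of 'rados bench' writes against a test pool.
    # Pool name, run length and thread counts are placeholders.
    import subprocess

    pool = "ectest"  # hypothetical erasure-coded test pool
    for threads in (16, 32, 64):
        cmd = ["rados", "bench", "-p", pool, "60", "write",
               "-t", str(threads), "--no-cleanup"]
        print("running: " + " ".join(cmd))
        subprocess.check_call(cmd)  # rados bench prints its own summary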

Cheers
> Cheers, Dan
>
> Cédric Lemarchand wrote:
>> Thanks for your answers Craig, it seems this is a niche use case for
>> Ceph; not a lot of replies from the ML.
>>
>> Cheers
>>
>> -- 
>> Cédric Lemarchand
>>
>> On 11 May 2014 at 00:35, Craig Lewis <clewis at centraldesktop.com> wrote:
>>
>>> On 5/10/14 12:43, Cédric Lemarchand wrote:
>>>> Hi Craig,
>>>>
>>>> Thanks, I really appreciate the well detailed response.
>>>>
>>>> I carefully note your advice, specifically about the CPU starvation
>>>> scenario, which as you said sounds scary.
>>>>
>>>> About IO: the data will be very resilient; in case of a crash, losing
>>>> not-fully-written objects will not be a problem (they will be
>>>> re-uploaded later), so I think in this specific case disabling
>>>> journaling could be a way to improve IO.
>>>> How would Ceph handle that? Are there caveats other than just losing
>>>> objects that were in the data path when the crash occurs? I know it
>>>> could sound weird, but the clients' workflow could support such a thing.
>>>>
>>>> Thanks !
>>>>
>>>> -- 
>>>> Cédric Lemarchand
>>>>
>>>> On 10 May 2014 at 04:30, Craig Lewis <clewis at centraldesktop.com> wrote:
>>>
>>> Disabling the journal does make sense in some cases, like when all the
>>> data is a backup copy.
>>>
>>> I don't know anything about how Ceph behaves in that setup. Maybe
>>> somebody else can chime in?
>>>
>>> -- 
>>>
>>> *Craig Lewis*
>>> Senior Systems Engineer
>>> Office +1.714.602.1309
>>> Email clewis at centraldesktop.com
>>>
>>>

-- 
Cédric


