Bulk storage use case

Hi,
I think you're not getting many replies simply because those are rather 
large servers and not many have such hardware in prod.

We run with 24x3TB drives, 64GB RAM, and one 10Gbit NIC. Memory-wise there 
are no problems. Throughput-wise, the bottleneck sits somewhere between 
the NIC (~1GB/s) and the HBA / SAS backplane (~1.6GB/s). Since writes 
coming in over the network are amplified at least 2x on their way to the 
disks (journal write plus data write), in our case the HBA is the 
bottleneck, so we have a practical limit of ~800-900MB/s.
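
To put rough numbers on that, here is a minimal Python sketch of the 
write ceiling; the NIC and HBA figures are the approximate ones above, 
and the 2x amplification is just the journal double-write, so treat the 
result as a ballpark, not a measurement:

    # Rough per-node write ceiling; all figures are approximations.
    NIC_MBPS = 1000          # one 10Gbit NIC, ~1 GB/s usable
    HBA_MBPS = 1600          # HBA / SAS backplane, ~1.6 GB/s
    WRITE_AMP = 2            # each client byte hits the disks at least
                             # twice: journal write + data write

    def write_ceiling(nic=NIC_MBPS, hba=HBA_MBPS, amp=WRITE_AMP):
        # Client-visible throughput is capped by the NIC, or by the HBA
        # budget divided by the write amplification, whichever is lower.
        return min(nic, hba / amp)

    print(write_ceiling())   # -> 800.0 MB/s, i.e. the ~800-900MB/s above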

Regarding IOPS, spinning disks with co-located journals leave a lot to 
be desired. But since your use case doesn't have RBD depending on low 
latencies, I don't think this will be a problem most of the time. Re: 
running without a journal... is that even possible? (unless you use the 
KV store, which is experimental and doesn't really show a big speedup 
anyway).
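
If it helps to see the arithmetic, a similar sketch of why co-located 
journals roughly halve write IOPS (the per-disk IOPS figure is an 
assumed ballpark for a 7200rpm SATA drive):

    # Each client write costs two operations on the same spindle when
    # the journal shares the disk with the data partition.
    DISK_IOPS = 100              # assumed per-spindle random write IOPS
    DISKS_PER_NODE = 24
    OPS_PER_CLIENT_WRITE = 2     # journal write + data write, co-located

    node_write_iops = DISKS_PER_NODE * DISK_IOPS / OPS_PER_CLIENT_WRITE
    print(node_write_iops)       # -> 1200.0 client write IOPS per node, roughly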

The other factor that makes it hard to judge your plan is how the 
erasure coding will perform, especially given only a 2Gbit network 
between servers. I would guess there is very little prod experience with 
the EC code as of today -- and probably zero with boxes similar to what 
you propose. But my gut tells me that with your proposed stripe width of 
12/3, combined with the slow network, getting good performance might be 
a challenge.
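
For what it's worth, here is an equally rough sketch of the fan-out a 
12/3 stripe implies per write. The usable link rate and object size are 
assumed figures, and the chunk distribution is only an approximation of 
what the primary OSD actually sends:

    # Erasure-coded write: the object is split into K data chunks plus M
    # coding chunks, and all but one chunk travel over the inter-server
    # network to other OSDs.
    K, M = 12, 3              # proposed stripe width
    NET_MBPS = 250            # ~2Gbit/s between servers, assumed usable rate
    OBJECT_MB = 4.0           # example object size

    chunk_mb = OBJECT_MB / K
    on_wire_mb = chunk_mb * (K + M - 1)    # one chunk stays local
    overhead = (K + M) / K                 # 1.25x raw-space overhead
    transfer_ms = on_wire_mb / NET_MBPS * 1000
    print(overhead, round(transfer_ms, 1)) # -> 1.25 and ~18.7 ms per 4MB object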

I would suggest you start some smaller-scale tests to get a feel for 
the performance before committing to a large purchase of this hardware type.

Cheers, Dan

Cédric Lemarchand wrote:
> Thanks for your answers, Craig. It seems this is a niche use case for
> Ceph; not a lot of replies on the ML.
>
> Cheers
>
> --
> Cédric Lemarchand
>
> On 11 May 2014 at 00:35, Craig Lewis <clewis at centraldesktop.com> wrote:
>
>> On 5/10/14 12:43, Cédric Lemarchand wrote:
>>> Hi Craig,
>>>
>>> Thanks, I really appreciate the well detailed response.
>>>
>>> I carefully note your advice, specifically about the CPU starvation
>>> scenario, which as you said sounds scary.
>>>
>>> About IO, the data will be very resilient; in case of a crash, losing
>>> not-fully-written objects will not be a problem (they will be re-uploaded
>>> later), so I think in this specific case, disabling journaling could
>>> be a way to improve IO.
>>> How will Ceph handle that? Are there caveats other than just losing
>>> objects that were in the data path when the crash occurs? I know it
>>> may sound weird, but the clients' workflow can support such a thing.
>>>
>>> Thanks !
>>>
>>> --
>>> Cédric Lemarchand
>>>
>>> On 10 May 2014 at 04:30, Craig Lewis <clewis at centraldesktop.com> wrote:
>>
>> Disabling the journal does make sense in some cases, such as when all
>> the data is a backup copy.
>>
>> I don't know anything about how Ceph behaves in that setup. Maybe
>> somebody else can chime in?
>>
>> --
>>
>> *Craig Lewis*
>> Senior Systems Engineer
>> Office +1.714.602.1309
>> Email clewis at centraldesktop.com
>>
>> *Central Desktop. Work together in ways you never thought possible.*
>> Connect with us Website <http://www.centraldesktop.com/> | Twitter
>> <http://www.twitter.com/centraldesktop> | Facebook
>> <http://www.facebook.com/CentralDesktop> | LinkedIn
>> <http://www.linkedin.com/groups?gid=147417> | Blog
>> <http://cdblog.centraldesktop.com/>
>>