Re: What is the maximum theoretical and practical capacity of a Ceph cluster?

On 10/27/2014 05:32 PM, Dan van der Ster wrote:
> Hi,
> 
> October 27 2014 5:07 PM, "Wido den Hollander" <wido@xxxxxxxx> wrote: 
>> On 10/27/2014 04:30 PM, Mike wrote:
>>
>>> Hello,
>>> My company is planning to build a big Ceph cluster for archiving and
>>> storing data.
>>> Per the customer's requirements, 70% of the capacity is SATA and 30% SSD.
>>> Data is written to the SSD storage on the first day and moved to the
>>> SATA storage the next day.
>>
>> How are you planning on moving this data? Do you expect Ceph to do this?
>>
>> What kind of access to Ceph are you planning on using? RBD? Raw RADOS?
>> The RADOS Gateway (S3/Swift)?
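
If the day-to-day move is going to be driven from the client side over raw
RADOS, it is essentially "read from the SSD pool, write to the SATA pool,
delete the original". Below is a minimal python-rados sketch of that idea;
the pool names 'ssd-pool' and 'sata-pool' are made up for the example, and
a real job would need chunked reads and some parallelism:

import rados

# Connect using the local ceph.conf and default keyring (assumed paths).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

ssd = cluster.open_ioctx('ssd-pool')    # hypothetical "hot" pool
sata = cluster.open_ioctx('sata-pool')  # hypothetical "cold" pool

for obj in ssd.list_objects():
    size, _mtime = ssd.stat(obj.key)
    data = ssd.read(obj.key, length=size)  # OK for a sketch; large objects
                                           # would need chunked reads
    sata.write_full(obj.key, data)         # copy to the SATA pool
    ssd.remove_object(obj.key)             # then drop the SSD copy

ssd.close()
sata.close()
cluster.shutdown()

Whether you do it like this, with separate CRUSH-rule-backed pools, or put
a cache tier in front of the SATA pool really depends on the answers to
the questions above.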
>>
>>> For now we have decided on a SuperMicro SKU with 72 drive bays per
>>> server = 22 SSDs + 50 SATA drives.
>>
>> Those are some serious machines. It will take a LOT of CPU power to run
>> 72 OSDs in each of them, probably 4 CPUs per machine.
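
As a back-of-envelope (the per-OSD figures below are rough rules of thumb,
not measurements), this is roughly where the "4 CPUs" estimate comes from:

# Rough CPU sizing for one 72-OSD node; all per-OSD numbers are assumed.
SATA_OSDS, SSD_OSDS = 50, 22
GHZ_PER_SATA_OSD = 1.0   # assumed ~1 GHz per spinning-disk OSD
GHZ_PER_SSD_OSD = 2.0    # assumed more for SSD-backed OSDs

ghz_needed = SATA_OSDS * GHZ_PER_SATA_OSD + SSD_OSDS * GHZ_PER_SSD_OSD
ghz_per_socket = 10 * 2.4  # e.g. an assumed 10-core 2.4 GHz Xeon
print(ghz_needed, ghz_needed / ghz_per_socket)  # ~94 GHz -> ~3.9 sockets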
>>
>>> Each of our racks can hold 10 of these servers, and the cluster will
>>> have 50 such racks = 36,000 OSDs.
>>
>> 36,000 OSDs shouldn't really be a problem, but you are thinking at a
>> really big scale here.
>>
> 
> AFAIK, the OSDs should scale, since each one only peers with ~100 others regardless of the cluster size. I wonder about the mons, though -- 36,000 OSDs will send a lot of pg_stats updates, so the mons will have some work to do to keep up. But the main issue I foresee is on the clients: don't be surprised when each client needs close to 100k threads while connected to this cluster. A hypervisor with 10 VMs running would approach 1 million threads -- I have no idea whether that will present any problems. There were discussions about limiting the number of client threads, but I don't know if there has been any progress on that yet.
> 

True about the mons. 3 monitors will not cut it here; I think you need at
least 9 MONs, running on dedicated hardware.
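
For anyone who wants to reproduce Dan's client-thread numbers: with the
current SimpleMessenger a client ends up with roughly two threads per open
OSD connection (a reader and a writer), which is an assumption about the
client internals here, but it puts you in the right ballpark:

# Back-of-envelope for the client thread counts Dan mentions above.
osds = 36000
threads_per_connection = 2            # assumed: reader + writer per connection
threads_per_client = osds * threads_per_connection
print(threads_per_client)             # 72,000 -> "close to 100k"
print(threads_per_client * 10)        # 720,000 -> a 10-VM hypervisor nears 1M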

> Anyway, it would be good to know if there are any current installations even close to this size (even in test). We are in the early days of planning a 10k OSD test, but haven't exceeded ~1,200 yet.
> 
> Cheers, Dan
> 
> 
>>> With 4 TB SATA drives, replica = 2, and a nearfull ratio of 0.8, we
>>> get 40 petabytes of usable capacity.
>>>
>>> Is that too big, or a normal use case for Ceph?
>>
>> No, it's not too big for Ceph. This is what it was designed for. But a
>> setup like this shouldn't be taken lightly.
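
For what it's worth, the quoted 40 PB figure checks out, assuming only the
50 SATA drives per node count toward the bulk tier (the 22 SSDs being the
landing tier):

# Sanity check of the usable-capacity estimate quoted above.
servers_per_rack, racks = 10, 50
sata_per_server, tb_per_drive = 50, 4
replicas, nearfull_ratio = 2, 0.8

raw_tb = servers_per_rack * racks * sata_per_server * tb_per_drive
usable_pb = raw_tb / replicas * nearfull_ratio / 1000.0
print(usable_pb)   # 40.0 PB of usable SATA capacity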
>>
>> Think about the network connectivity required to connect all these
>> machines, and about the many other decisions that have to be made.
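
A very rough per-node bandwidth estimate (the per-drive rates below are
assumptions, not benchmarks) shows why the network design matters here:

# Rough aggregate disk bandwidth for one 72-drive node.
sata_drives, ssd_drives = 50, 22
mb_s_sata, mb_s_ssd = 100, 400     # assumed sustained MB/s per device

node_mb_s = sata_drives * mb_s_sata + ssd_drives * mb_s_ssd
print(node_mb_s * 8 / 1000.0)      # ~110 Gbit/s of raw disk bandwidth

And with replica = 2, every client write is sent a second time over the
cluster network, so replication roughly doubles the write traffic.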
>>


-- 
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



