Re: a big cluster or several small

Hi,


>>Our main reason for using multiple clusters is that Ceph has a bad 
>>reliability history when scaling up, and even now many issues remain 
>>unresolved (https://tracker.ceph.com/issues/21761, for example). By 
>>dividing a single large cluster into a few smaller ones, we reduce the impact 
>>on customers when things go fatally wrong - when one cluster goes down, or 
>>its performance drops to single-ESDI-drive levels due to recovery, the other 
>>clusters - and their users - are unaffected. For us this has already proved 
>>useful in the past.


We are also running multiple small clusters here (3 nodes, 18 OSDs, SSD or NVMe),
mainly for VMs and RBD, so it's not a problem for us.

The main reasons are to avoid lag for all clients when an OSD goes down, for example, and to make upgrades easier.


We only run a bigger cluster for radosgw and object storage.


Alexandre


----- Original Message -----
From: "Piotr Dałek" <piotr.dalek@xxxxxxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Tuesday, May 15, 2018 09:14:53
Subject: Re: a big cluster or several small

On 18-05-14 06:49 PM, Marc Boisis wrote: 
> 
> Hi, 
> Currently we have a 294-OSD cluster (21 hosts / 3 racks) with RBD clients 
> only and a single pool (size=3). 
> 
> We want to divide this cluster into several smaller ones to minimize the risk 
> in case of failure/crash. 
> For example, a cluster for mail, another for the file servers, a test 
> cluster ... 
> Do you think it's a good idea? 

If reliability and data availability are your main concerns, and you don't 
share data between clusters - yes. 

> Do you have any experience or feedback on multiple clusters in production on the 
> same hardware: 
> - containers (LXD or Docker) 
> - multiple clusters on the same host without virtualization (with ceph-deploy 
> ... --cluster ...) 
> - multiple pools 
> ... 
> 
> Do you have any advice? 

We're using containers to host OSDs, but we don't host multiple clusters on 
the same machine (in other words, a single physical machine hosts containers for 
one and the same cluster). We're using Ceph for RBD images, so having 
multiple clusters isn't a problem for us. 
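For reference, running a second cluster side by side on the same hosts (the
ceph-deploy ... --cluster ... option Marc mentioned) works by giving each
cluster its own name, which selects separate config and keyring files. A
minimal sketch - the cluster name "backup" and the node names are illustrative
assumptions, and you must still make sure the monitors of the two clusters
don't collide on addresses/ports:

```shell
# Default cluster: name "ceph", configuration in /etc/ceph/ceph.conf.
ceph-deploy new node1 node2 node3

# Hypothetical second cluster named "backup" on the same hosts;
# ceph-deploy writes backup.conf / backup.mon.keyring instead, and the
# daemons read /etc/ceph/backup.conf at runtime.
ceph-deploy --cluster backup new node1 node2 node3
ceph-deploy --cluster backup mon create-initial

# Clients and admin tools must then name the non-default cluster explicitly:
ceph --cluster backup status
```

The per-cluster config files are also where you would assign distinct monitor
IPs or ports so the two clusters can coexist on shared hardware.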

Our main reason for using multiple clusters is that Ceph has a bad 
reliability history when scaling up, and even now many issues remain 
unresolved (https://tracker.ceph.com/issues/21761, for example). By 
dividing a single large cluster into a few smaller ones, we reduce the impact 
on customers when things go fatally wrong - when one cluster goes down, or 
its performance drops to single-ESDI-drive levels due to recovery, the other 
clusters - and their users - are unaffected. For us this has already proved 
useful in the past. 

-- 
Piotr Dałek 
piotr.dalek@xxxxxxxxxxxx 
https://www.ovhcloud.com 
_______________________________________________ 
ceph-users mailing list 
ceph-users@xxxxxxxxxxxxxx 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
