Re: Scaling out

Thanks heaps Nathan. That's what we thought and what we wanted to implement, but I wanted to double-check with the community.


Cheers


On Thu, Nov 21, 2019 at 2:42 PM Nathan Fish <lordcirth@xxxxxxxxx> wrote:
The default CRUSH rule uses "host" as the failure domain, so in order
to deploy on one host you will need to make a CRUSH rule that
specifies "osd". Then simply adding more hosts with OSDs will result
in automatic rebalancing. Once you have enough hosts to satisfy the
CRUSH rule (3 for replicated size = 3) you can change the pool(s)
back to the default rule.
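
On Nautilus that would look roughly like the following sketch. The rule
name "rep_osd" and pool name "mypool" are just placeholders, and
"replicated_rule" is the usual name of the default replicated rule:

    # create a replicated rule that uses "osd" as the failure domain
    ceph osd crush rule create-replicated rep_osd default osd

    # point the pool at it while everything lives on a single host
    ceph osd pool set mypool crush_rule rep_osd

    # later, once 3+ hosts are in the cluster, switch the pool back
    # to the default host-level rule
    ceph osd pool set mypool crush_rule replicated_rule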

On Thu, Nov 21, 2019 at 7:46 AM Alfredo De Luca
<alfredo.deluca@xxxxxxxxx> wrote:
>
> Hi all.
> We are doing some tests on how to scale out nodes on Ceph Nautilus.
> Basically we want to try to install Ceph on one node and then scale out to 2+ nodes. How can we do that?
>
> Every node has 6 disks; maybe we can use the CRUSH map to achieve this?
>
> Any thoughts/ideas/recommendations?
>
>
> Cheers
>
>
> --
> Alfredo
>


--
Alfredo

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
