Re: some ceph general questions about the design

Hi Harald,

- then I build a 3-node OSD cluster, pass through all disks, and install
the mgr daemon on them
- I build 3 separate mon servers and install the rgw there, right?

As others suggested, you can use VMs for the mgr, mon, and rgw daemons; they
are not IOPS-intensive and are very flexible, so you can easily replace,
move, or add them in the future.
Whether you can install them on the same machine depends on your load and
the level of complexity you want: the more nodes you have, the more complex
the infrastructure you will have to manage. But separating them makes the
cluster more resilient and easier to troubleshoot in case of a disaster
(God forbid!).
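As a rough sketch of what such a layout could look like (the hostnames
mon1-mon3 and osd1-osd3 are my own placeholders, not from this thread), a
minimal ceph.conf fragment might be:

```
# Hypothetical layout: osd1-osd3 run OSDs + mgr, mon1-mon3 run mon + rgw.
# Hostnames are assumptions for illustration only.
[global]
fsid = <your-cluster-fsid>
mon_initial_members = mon1, mon2, mon3
mon_host = mon1, mon2, mon3
```

The mon/rgw VMs only need modest CPU and RAM; the OSD nodes are where the
disk and network bandwidth matters.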

- osd crush chooseleaf type - can I set this to 3 if the hosts are in
different racks? Is that recommended?
Honestly I don't know, but I think changing choose/chooseleaf has some
prerequisites and cluster-specific considerations. Better to read up and
ask more about it before changing it.
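For reference, the number refers to a CRUSH bucket type, and in the default
type hierarchy 0 = osd, 1 = host, and 3 = rack, so 3 would make the rack the
failure domain. A sketch of how that is usually expressed (rule name
"replicated_rack" is my own placeholder):

```
# ceph.conf: make chooseleaf descend to rack rather than host
[global]
osd crush chooseleaf type = 3

# Or, equivalently, create a replicated rule with rack as failure domain:
#   ceph osd crush rule create-replicated replicated_rack default rack
```

This only makes sense if your CRUSH map actually has the hosts placed under
rack buckets; otherwise PGs cannot be mapped and will stay degraded.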
- osd pool default size - can I set this to 2, e.g. if I need to do
maintenance on an OSD and shut it down?
AFAIK a default pool size below 3 is not recommended; if you want to do
maintenance on some OSDs, you can mark them out and do your job.
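For a short maintenance window, the usual pattern is to set the noout flag
so Ceph does not start rebalancing while the OSD is down (the `<id>` below
is a placeholder for the OSD number, and the systemd unit name assumes a
standard systemd-based install):

```
# Prevent OSDs from being marked out while you work
ceph osd set noout

systemctl stop ceph-osd@<id>
# ... do the maintenance ...
systemctl start ceph-osd@<id>

# Re-enable normal behavior once the OSD is back and healthy
ceph osd unset noout
```

That way you keep size=3 for safety and still avoid unnecessary data
movement during planned downtime.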

Thanks,
Khodayar


On Tue, Apr 21, 2020 at 12:14 AM <harald.freidhof@xxxxxxxxx> wrote:

> ok thx for the answers
>
> we will connect later nearly 20 kvm hosts on the ceph cluster
>
> - then i build a 3 node osd cluster and passtrough all disks and i install
> the mgr  daemon on them
> - i build 3 seperate mon server and install here the rgw? right?
>
> a few question to the options:
> - osd crush chooseleaf type - can i set here 3? if the hosts are in
> different racks? is that recomended?
> - osd pool default size -  can i set here 2 ? for ex if i need to
> maintainance a osd and need to shutdown?
>
> thx again in advance for you answers
> hfreidhof
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>