Re: right pg_num value for CephFS Quick Start guide

On Sat, 14 Sep 2019 at 00:46, JC Lopez <jelopez@xxxxxxxxxx> wrote:
>
> Hi,
>
> If you have a proper setup, you should always reach active+clean for all your PGs:
>
> - Single node with 2 OSDs: Rule replicates across OSDs; set size=2 and min_size=1 on your pool
> - Single node with 3 OSDs: Rule replicates across OSDs (default is size=3, min_size=2 on your pool)
> - Two-node cluster: Rule replicates across HOSTs; set size=2 and min_size=1 on your pool
> - Three-node cluster: Rule replicates across HOSTs (default is size=3, min_size=2 on your pool)
>
> Tip: For a single-node deployment, set osd_crush_chooseleaf_type = 0 in the [global] section of your configuration file before you deploy your MONs and OSDs, and the correct CRUSH rule will be created.

I get a perfect cluster status now. I can see HEALTH_OK as well as
"active+clean" for all PGs. Thank you!
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


