Re: right pg_num value for CephFS Quick Start guide

On Wed, 11 Sep 2019 at 23:13, Vasu Kulkarni <vakulkar@xxxxxxxxxx> wrote:
>
>
>
> On Wed, Sep 11, 2019 at 10:38 AM Rishabh Dave <ridave@xxxxxxxxxx> wrote:
>>
>> Hello,
>>
>> While working on CephFS Quick Start guide[1], the major issue that I
>> came across was choosing the value for pg_num for the pools that will
>> serve CephFS. I've tried the values from 4 to 128 for both data and
>> metadata pools and have always got "undersized+peered" instead of
>> "active+clean". Copying pg_num values from the cluster setup by
>> vstart.sh (8 for data and 16 for metadata pools) gave me the same
>> result.
>>
>> About the cluster: I had a single node running Fedora 29 with 1 MON, 1
>> MGR, 1 MDS and 3 OSDs each with a disk size of 10 GB. Thinking that
>> disk size might have a role to play, I changed the number of OSDs to 2
>> each with 20 GB disks and later with 50 GB disks but neither helped. I
>> used dnf to install ceph and ceph-deploy to set up the cluster.
>>
>> I've copied the cluster status after every attempt here[2] in case
>> that helps. Any suggestions on which pg_num values I should choose,
>> and which values would be suitable for a user who just wants to get
>> started with CephFS quickly?
>
> Why not recommend this in quick-start for master or from nautilus stable?
> https://ceph.com/rados/new-in-nautilus-pg-merging-and-autotuning/
>
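
For reference, the pool setup I was testing boils down to something like
the following (just a rough sketch; the cephfs_data/cephfs_metadata pool
names and the vstart pg_num values are the ones mentioned above, not a
recommendation):

  # create the data and metadata pools with explicit pg_num values
  ceph osd pool create cephfs_data 8
  ceph osd pool create cephfs_metadata 16
  # create the filesystem on top of them (metadata pool first)
  ceph fs new cephfs cephfs_metadata cephfs_data
  # then watch the PG states
  ceph -s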

Although I could get "active+clean" for all PGs but not HEALTH_OK, it
would be great to mention this anyway. Thanks for pointing it out.
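
If we do add it to the quick start, I imagine it would be something along
these lines (again only a sketch, assuming the same pool names as above;
on Nautilus the pg_autoscaler module still has to be enabled explicitly):

  # enable the autoscaler module
  ceph mgr module enable pg_autoscaler
  # let it manage pg_num for both CephFS pools
  ceph osd pool set cephfs_data pg_autoscale_mode on
  ceph osd pool set cephfs_metadata pg_autoscale_mode on

With the autoscaler managing pg_num, the guide could probably drop the
explicit values altogether.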
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


