On Wed, Sep 11, 2019 at 10:38 AM Rishabh Dave <ridave@xxxxxxxxxx> wrote:
Hello,
While working on CephFS Quick Start guide[1], the major issue that I
came across was choosing the value for pg_num for the pools that will
serve CephFS. I've tried values from 4 to 128 for both the data and
metadata pools and have always ended up with "undersized+peered" instead
of "active+clean". Copying the pg_num values from the cluster set up by
vstart.sh (8 for the data pool and 16 for the metadata pool) gave the same
result.
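For context, the pools and the filesystem were created in the standard way
from the docs; roughly the following (the pool names and pg_num values here
are just examples of what I tried):

    ceph osd pool create cephfs_data 8
    ceph osd pool create cephfs_metadata 16
    ceph fs new cephfs cephfs_metadata cephfs_data

    # the PGs then stay undersized+peered regardless of pg_num
    ceph -s
    ceph health detail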
About the cluster: I had a single node running Fedora 29 with 1 MON, 1
MGR, 1 MDS and 3 OSDs each with a disk size of 10 GB. Thinking that
disk size might have a role to play, I changed the number of OSDs to 2,
each with a 20 GB disk and later a 50 GB disk, but neither helped. I used
dnf to install ceph and ceph-deploy to set up the cluster.
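If the cause is replication rather than pg_num (the default CRUSH rule
places replicas on different hosts, and with size 3 a single host can never
satisfy that), then I'd expect something like the following in ceph.conf to
be needed for a one-node test cluster. This is just a sketch I haven't
verified against the quick start:

    [global]
    # single-host test cluster: replicate across OSDs instead of hosts,
    # and lower the replica count so 2-3 OSDs on one node are enough
    osd crush chooseleaf type = 0
    osd pool default size = 2
    osd pool default min size = 1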
I've copied the cluster status after every attempt here[2] in case that
helps. Any suggestions on which pg_num values I should choose, and on
pg_num values that would suit a user looking to get started with CephFS
quickly?
Why not recommend this in the quick-start for master or from nautilus stable?
[1] https://docs.ceph.com/docs/master/start/quick-cephfs/
[2] https://paste.fedoraproject.org/paste/Q-WH8VWtwu6JwF7eW2JmnA
Thanks,
- Rishabh
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx