On Wed, 11 Sep 2019, Rishabh Dave wrote:
> Hello,
>
> While working on the CephFS Quick Start guide[1], the major issue I
> came across was choosing the value of pg_num for the pools that will
> serve CephFS. I've tried values from 4 to 128 for both the data and
> metadata pools and have always got "undersized+peered" instead of
> "active+clean". Copying the pg_num values from the cluster set up by
> vstart.sh (8 for the data and 16 for the metadata pool) gave me the
> same result.
>
> About the cluster: I had a single node running Fedora 29 with 1 MON,
> 1 MGR, 1 MDS and 3 OSDs, each with a disk size of 10 GB. Thinking
> that disk size might have a role to play, I changed the number of
> OSDs to 2, each with 20 GB disks and later with 50 GB disks, but
> neither helped. I used dnf to install ceph and ceph-deploy to set up
> the cluster.

This is unrelated to the PGs or the capacity--the problem is that you
have a single node, and the default CRUSH rule replicates across hosts.
That's why your pools are unhealthy.

You can fix this by creating a new CRUSH rule with 'osd' instead of
'host' as the failure domain, and then setting your pool(s) to use that
rule.

  osd crush rule create-replicated <name> <root> <type> {<class>}
      create crush rule <name> for replicated pool to start from
      <root>, replicate across buckets of type <type>, use devices of
      type <class> (ssd or hdd)

  osd pool set <poolname> crush_rule <rule-name>

sage

> I've copied the cluster status after every attempt here[2] in case
> that helps. Any suggestions about the pg_num values I should choose,
> and about values that would work well for a user looking to get
> started quickly with CephFS?
>
> [1] https://docs.ceph.com/docs/master/start/quick-cephfs/
> [2] https://paste.fedoraproject.org/paste/Q-WH8VWtwu6JwF7eW2JmnA
>
> Thanks,
> - Rishabh
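
A concrete instance of the commands Sage describes above, as an
illustrative sketch: it assumes the default CRUSH root 'default', a
hypothetical rule name 'replicated_osd', and CephFS pools named
cephfs_data and cephfs_metadata (the pool names on your cluster may
differ):

  # create a replicated rule that picks OSDs rather than hosts
  ceph osd crush rule create-replicated replicated_osd default osd
  # point the CephFS pools at the new rule
  ceph osd pool set cephfs_data crush_rule replicated_osd
  ceph osd pool set cephfs_metadata crush_rule replicated_osd

With 'osd' as the failure domain, replicas are allowed to land on
different OSDs of the same host, so a single-node cluster can reach
active+clean.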