right pg_num value for CephFS Quick Start guide


Hello,

While working on the CephFS Quick Start guide[1], the major issue I came
across was choosing the pg_num value for the pools that will serve CephFS.
I've tried values from 4 to 128 for both the data and metadata pools and
always end up with PGs in "undersized+peered" instead of "active+clean".
Copying the pg_num values from a cluster set up by vstart.sh (8 for the data
pool and 16 for the metadata pool) gave the same result.

About the cluster: it's a single node running Fedora 29 with 1 MON, 1 MGR,
1 MDS and 3 OSDs, each backed by a 10 GB disk. Thinking that disk size might
play a role, I switched to 2 OSDs with 20 GB disks and later with 50 GB
disks, but neither helped. I used dnf to install ceph and ceph-deploy to set
up the cluster.
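
Roughly, the ceph-deploy sequence was along these lines (the hostname and
disk device names are illustrative rather than the exact ones I used):

    ceph-deploy new node1
    ceph-deploy mon create-initial
    ceph-deploy admin node1
    ceph-deploy mgr create node1
    # one OSD per disk (three disks in the first attempt)
    ceph-deploy osd create --data /dev/vdb node1
    ceph-deploy osd create --data /dev/vdc node1
    ceph-deploy osd create --data /dev/vdd node1
    ceph-deploy mds create node1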

I've copied the cluster status after every attempt here[2] in case that
helps. Any suggestions on which pg_num values I should choose, and which
values would be sensible defaults for a user who wants to get started with
CephFS quickly?
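
In case it helps anyone reproduce what's in the paste, the PG state can be
inspected with something like:

    ceph -s
    ceph health detail
    ceph osd pool ls detail
    ceph pg dump_stuck undersized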

[1] https://docs.ceph.com/docs/master/start/quick-cephfs/
[2] https://paste.fedoraproject.org/paste/Q-WH8VWtwu6JwF7eW2JmnA

Thanks,
- Rishabh


