Hi all,
I have set up a cluster for use with cephfs. Trying to follow the recommendations for the MDS service, I picked two machines which provide SSD-based
disk space, 2 TB each, to put the cephfs metadata pool on them.
My ~20 HDD-based OSDs in the cluster have 43 TB each.
I created a crush rule tied to this MDS hardware and then created the metadata pool by specifying that rule name, mds-ssd:
ceph osd pool create metadata0 128 128 replicated mds-ssd
whereas the data pool was just created as a standard replicated pool.
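For reference, the rule itself was created roughly like this (from memory, so the exact root and failure-domain arguments may differ; I believe I restricted it to the ssd device class):
ceph osd crush rule create-replicated mds-ssd default host ssd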
The cephfs creation seemed to work with these pools, but now the system is stuck with
> pgs: 22/72 objects degraded (30.556%)
> 513 active+clean
> 110 active+undersized
> 18 active+undersized+degraded
What is the main reason for this? I can think of two (some commands I used to check them follow after the list):
1. There are only two hosts/OSDs for the metadata pool, while a replicated pool without further tweaks wants three replicas, i.e. three OSDs/hosts?
2. Ceph might have placed the metadata pool onto said SSD OSDs, but still considers them valid targets for the other pools, hence tries to reconcile OSDs
of 2 TB and 43 TB and fails?
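This is what I have been looking at so far (metadata0 and mds-ssd are of course just my pool/rule names):
ceph osd pool get metadata0 size
ceph osd pool get metadata0 min_size
ceph osd crush rule dump mds-ssd
ceph osd df tree
If it really is just the size=3 default on two hosts, I guess "ceph osd pool set metadata0 size 2" would be the quick-and-dirty fix, but I'd rather understand whether that is the actual problem first.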
Btw, how can I change the default failure domain (osd, host, whatever)?
This is all Quincy, cephadm, so there is no ceph.conf anymore, and I did not find the command to inject my failure domain into the config database...
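What I was expecting is something along these lines (the option name is only my guess from the docs, not verified):
ceph config set mon osd_crush_chooseleaf_type 0
Otherwise I suppose the workaround is to create another rule with the failure domain I want and point the pool at it (mds-ssd-osd is just a made-up name):
ceph osd crush rule create-replicated mds-ssd-osd default osd ssd
ceph osd pool set metadata0 crush_rule mds-ssd-osd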
Regards
Thomas
--
--------------------------------------------------------------------
Thomas Roth IT-HPC-Linux
Location: SB3 2.291 Phone: 1453
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx