I removed cephfs and its pools, created everything again using the default crush ruleset, which is for the HDD, and now ceph health is OK.
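For reference, the sequence was roughly the following (a sketch, not a paste of my shell history; the filesystem has to be taken down before it can be removed, and <id> is a placeholder for whatever the local MDS instance is called):

systemctl stop ceph-mds@<id>            # stop the MDS first; <id> is a placeholder
ceph fs rm cephfs --yes-i-really-mean-it
ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
ceph osd pool create cephfs_metadata 128 128    # recreated with the same pg counts as before
ceph osd pool create cephfs_data 128 128
ceph fs new cephfs cephfs_metadata cephfs_data

The new pools pick up ruleset 0 (replicated_ruleset, the HDD one) by default, so no extra crush_ruleset step was needed.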
I appreciate your help. Thank you very much.
On Tue, Nov 15, 2016 at 11:48 AM Webert de Souza Lima <webert.boss@xxxxxxxxx> wrote:
Right, thank you.
On this particular cluster it would be Ok to have everything on the HDD. No big traffic here.
In order to do that, do I need to delete this cephfs, delete its pools and create them again?
After that I assume I would run ceph osd pool set cephfs_metadata crush_ruleset 0, as 0 is the id of the hdd crush rule.

On Tue, Nov 15, 2016 at 11:09 AM Burkhard Linke <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:

Hi,
On 11/15/2016 01:55 PM, Webert de Souza Lima wrote:
sure, as requested:
cephfs was created using the following command:
ceph osd pool create cephfs_metadata 128 128
ceph osd pool create cephfs_data 128 128
ceph fs new cephfs cephfs_metadata cephfs_data
ceph.conf:
https://paste.debian.net/895841/
# ceph osd crush tree
https://paste.debian.net/895839/
# ceph osd crush rule list
["replicated_ruleset","replicated_ruleset_ssd"]
# ceph osd crush rule dump

I assume that you want the cephfs_metadata pool to be located on the SSD.
The ssd crush rule uses host-based distribution, but there are only two hosts available. The default replicated rule uses osd-based distribution, which is why the other pools aren't affected.
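To illustrate the difference, a decompiled crush map for such a setup typically contains rules along these lines (only a sketch, not the actual dump behind the paste link; the bucket names are assumptions):

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type osd     # osd based: two hosts are enough for 3 replicas
        step emit
}

rule replicated_ruleset_ssd {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd                         # assumed name of the SSD root/bucket
        step chooseleaf firstn 0 type host    # host based: needs at least 3 hosts for 3 replicas
        step emit
}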
You have configured the default number of replicas to 3, so the ssd rule cannot be satisfied with only two hosts. You either need to put the metadata pool on the HDDs, too, or use a pool size of 2 (which is not recommended).
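In terms of commands, the two options would look roughly like this (a sketch; check the rule id with ceph osd crush rule dump before using it):

ceph osd pool set cephfs_metadata crush_ruleset 0    # option 1: keep the metadata pool on the HDD rule
ceph osd pool set cephfs_metadata size 2             # option 2: reduce the replica count (not recommended)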
Regards,
Burkhard

--
Dr. rer. nat. Burkhard Linke
Bioinformatics and Systems Biology
Justus-Liebig-University Giessen
35392 Giessen, Germany
Phone: (+49) (0)641 9935810

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com