Re: Can't recover pgs degraded/stuck unclean/undersized

Hi,


On 11/15/2016 01:55 PM, Webert de Souza Lima wrote:
sure, as requested:

cephfs was created using the following command:

ceph osd pool create cephfs_metadata 128 128
ceph osd pool create cephfs_data 128 128
ceph fs new cephfs cephfs_metadata cephfs_data

ceph.conf:
https://paste.debian.net/895841/

# ceph osd crush tree
https://paste.debian.net/895839/

# ceph osd crush rule list
[
    "replicated_ruleset",
    "replicated_ruleset_ssd"
]

# ceph osd crush rule dump

I assume that you want the cephfs_metadata pool to be located on the SSD.

The SSD crush rule uses host-based distribution, but there are only two hosts available. The default replicated rule uses OSD-based distribution, which is why the other pools are not affected.
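
If you want to double-check that, the failure domain is visible in the chooseleaf step of each rule dump (a sketch only, output abbreviated; the exact content depends on your crush map):

# ceph osd crush rule dump replicated_ruleset_ssd
{
    "rule_name": "replicated_ruleset_ssd",
    ...
    "steps": [
        ...
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "host"
        },
        ...
    ]
}

If the SSD rule shows "type": "host" here while replicated_ruleset shows "type": "osd", that matches the behaviour described above.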

You have configured the default number of replicas to 3, so the SSD rule cannot be satisfied with only two hosts. You either need to put the metadata pool on the HDDs, too, or use a pool size of 2 (which is not recommended).
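
Roughly, the two options would look like this (a sketch only, assuming the pool option on your release is still called crush_ruleset and takes a numeric rule id):

# put the metadata pool on the HDD rule as well
# (0 is only an example; use the id reported for replicated_ruleset in your crush rule dump)
ceph osd pool set cephfs_metadata crush_ruleset 0

# or reduce the pool size to 2 (not recommended)
ceph osd pool set cephfs_metadata size 2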

Regards,
Burkhard
-- 
Dr. rer. nat. Burkhard Linke
Bioinformatics and Systems Biology
Justus-Liebig-University Giessen
35392 Giessen, Germany
Phone: (+49) (0)641 9935810
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
