Re: Use 2 osds to create cluster but health check display "active+degraded"


 



Hi.
By default this parameter is not applied to the pools that already exist.
Run "ceph osd dump | grep pool" and check the size= value.
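If the existing pools still show size 3, the replication count can be lowered per pool; a rough sketch, assuming the default data, metadata and rbd pools (use whatever names ceph osd dump actually shows):

    ceph osd pool set data size 2
    ceph osd pool set metadata size 2
    ceph osd pool set rbd size 2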


2014-10-29 11:40 GMT+03:00 Vickie CH <mika.leaf666@xxxxxxxxx>:
Dear Irek:

Thanks for your reply.
Even with "osd_pool_default_size = 2" already set, does the cluster still need 3 different hosts?
Can this default be changed by the user and written into ceph.conf before deploying?
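A related [global] setting that is often suggested for single-host test clusters is the CRUSH chooseleaf type; it only takes effect through the default CRUSH rule created when the cluster is first deployed, so it has to go into ceph.conf beforehand. A minimal sketch of the extra line:

    osd_crush_chooseleaf_type = 0    # 0 = osd, 1 = host (default); lets both replicas land on the same host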


Best wishes,
Mika

2014-10-29 16:29 GMT+08:00 Irek Fasikhov <malmyzh@xxxxxxxxx>:
Hi.

Because the default number of replicas is 3, the data has to be placed on three different hosts.
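For a cluster that is already deployed, the failure domain in the default CRUSH rule can be changed from host to osd by editing the decompiled map; a rough sketch, with arbitrary file names:

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # in crushmap.txt, change the replicated rule's
    #   step chooseleaf firstn 0 type host
    # to
    #   step chooseleaf firstn 0 type osd
    crushtool -c crushmap.txt -o crushmap-new.bin
    ceph osd setcrushmap -i crushmap-new.bin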

2014-10-29 10:56 GMT+03:00 Vickie CH <mika.leaf666@xxxxxxxxx>:
Hi all,
      I tried to use two OSDs to create a cluster. After the deploy finished, the health status showed "88 active+degraded" and "104 active+remapped". When I created a cluster before, not with only 2 OSDs, the result was fine. I'm confused about why this happened. Do I need to adjust the crush map to fix this problem?


----------ceph.conf---------------------------------
[global]
fsid = c404ded6-4086-4f0b-b479-89bc018af954
mon_initial_members = storage0
mon_host = 192.168.1.10
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 128
osd_journal_size = 2048
osd_pool_default_pgp_num = 128
osd_mkfs_type = xfs
---------------------------------------------------------

-----------ceph -s-----------------------------------
cluster c404ded6-4086-4f0b-b479-89bc018af954
     health HEALTH_WARN 88 pgs degraded; 192 pgs stuck unclean
     monmap e1: 1 mons at {storage0=192.168.10.10:6789/0}, election epoch 2, quorum 0 storage0
     osdmap e20: 2 osds: 2 up, 2 in
      pgmap v45: 192 pgs, 3 pools, 0 bytes data, 0 objects
            79752 kB used, 1858 GB / 1858 GB avail
                  88 active+degraded
                 104 active+remapped
--------------------------------------------------------
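A quick way to check whether both OSDs ended up under the same host bucket (which would explain why CRUSH cannot place the second copy) and which PGs are stuck:

    ceph osd tree
    ceph health detail
    ceph pg dump_stuck unclean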


Best wishes,
Mika





--
Best regards, Irek Nurgayazovich Fasikhov
Mobile: +79229045757




--
Best regards, Irek Nurgayazovich Fasikhov
Mobile: +79229045757
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
