PGs degraded with 3 MONs and 1 OSD node

Hi,

I would just like to clarify whether I should expect degraded PGs with 11 OSDs in a single node. I am not sure if a setup with 3 MON nodes and 1 OSD node (11 disks) allows me to have a healthy cluster.

$ sudo ceph osd pool create test 512
pool 'test' created
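
If it helps with diagnosis, I have left the pool at its defaults; I assume the replication settings can be checked with something like:

$ sudo ceph osd pool get test size
$ sudo ceph osd pool get test min_size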

$ sudo ceph status
    cluster 4e77327a-118d-450d-ab69-455df6458cd4
     health HEALTH_WARN 512 pgs degraded; 512 pgs stuck unclean; 512 pgs undersized
     monmap e1: 3 mons at {ceph1=172.16.41.31:6789/0,ceph2=172.16.41.32:6789/0,ceph3=172.16.41.33:6789/0}, election epoch 36, quorum 0,1,2 ceph1,ceph2,ceph3
     osdmap e190: 11 osds: 11 up, 11 in
      pgmap v342: 512 pgs, 1 pools, 0 bytes data, 0 objects
            53724 kB used, 9709 GB / 9720 GB avail
                 512 active+undersized+degraded
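
My guess (and I may well be wrong) is that this is caused by the default CRUSH rule placing replicas on separate hosts, rather than by the number of OSDs. If it helps, I assume the rule in use can be inspected with:

$ sudo ceph osd crush rule dump
# I expect to see a step like "chooseleaf firstn 0 type host" in the default rule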

$ sudo ceph osd tree
# id    weight  type name       up/down reweight
-1      9.45    root default
-2      9.45            host ceph4
0       0.45                    osd.0   up      1
1       0.9                     osd.1   up      1
2       0.9                     osd.2   up      1
3       0.9                     osd.3   up      1
4       0.9                     osd.4   up      1
5       0.9                     osd.5   up      1
6       0.9                     osd.6   up      1
7       0.9                     osd.7   up      1
8       0.9                     osd.8   up      1
9       0.9                     osd.9   up      1
10      0.9                     osd.10  up      1
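
If the single-host layout is indeed the problem, would a workaround along these lines be reasonable? This is only a sketch based on my reading of the docs (I am not sure of the exact syntax for my version), and the rule name replicated_osd is just a placeholder I made up:

$ sudo ceph osd crush rule create-simple replicated_osd default osd
$ sudo ceph osd pool set test crush_ruleset <new rule id>

Or, for a test cluster, I believe setting "osd crush chooseleaf type = 0" in ceph.conf before creating the OSDs achieves the same thing.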


Thank you,
Jiri


