Help with pgs undersized+degraded+peered


 



I have installed Ceph 0.94.2 using the ceph-deploy utility. I created three Ubuntu 14.04 VMs, ceph01, ceph02 and ceph03; each one runs 3 OSD daemons and 1 mon, and ceph01 also hosts ceph-deploy.


I need help: I have read the online docs and tried many things, but I cannot figure out why my cluster status is always HEALTH_WARN. Regardless of the number of PGs defined, they stay in the state undersized+degraded+peered.


Here is how I built the cluster:

root@ceph01:~# mkdir /opt/ceph
root@ceph01:~# cd /opt/ceph
root@ceph01:/opt/ceph# ceph-deploy new ceph01
root@ceph01:/opt/ceph# ceph-deploy install ceph01 ceph02 ceph03
root@ceph01:/opt/ceph# ceph-deploy mon create-initial
root@ceph01:/opt/ceph# ceph-deploy disk zap ceph01:vdc ceph02:vdc ceph03:vdc ceph01:vdd ceph02:vdd ceph03:vdd  ceph01:vde ceph02:vde ceph03:vde
root@ceph01:/opt/ceph# ceph-deploy osd create ceph01:vdc ceph02:vdc ceph03:vdc ceph01:vdd ceph02:vdd ceph03:vdd  ceph01:vde ceph02:vde ceph03:vde

root@ceph01:/opt/ceph# ceph-deploy mon add ceph02
root@ceph01:/opt/ceph# ceph-deploy mon add ceph03
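
In case the OSD/CRUSH layout matters, I can also attach the output of ceph osd tree (host/OSD hierarchy, CRUSH weights, up/in state); I am only listing the command here, not its output:

root@ceph01:/opt/ceph# ceph osd tree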


root@ceph01:/opt/ceph# ceph status
    cluster d54a2216-b522-4744-a7cc-a2106e1281b6
     health HEALTH_WARN
            64 pgs degraded
            64 pgs stuck degraded
            64 pgs stuck inactive
            64 pgs stuck unclean
            64 pgs stuck undersized
            64 pgs undersized
            too few PGs per OSD (7 < min 30)
     monmap e3: 3 mons at {ceph01=172.16.70.158:6789/0,ceph02=172.16.70.159:6789/0,ceph03=172.16.70.160:6789/0}
            election epoch 8, quorum 0,1,2 ceph01,ceph02,ceph03
     osdmap e29: 9 osds: 9 up, 9 in
      pgmap v49: 64 pgs, 1 pools, 0 bytes data, 0 objects
            296 MB used, 45684 MB / 45980 MB avail
                  64 undersized+degraded+peered
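
If more detail on the stuck PGs helps, I assume I could run something like the following and post the output (0.0 is just an example PG id from the rbd pool):

root@ceph01:/opt/ceph# ceph pg dump_stuck inactive
root@ceph01:/opt/ceph# ceph pg 0.0 query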

root@ceph01:/opt/ceph# ceph osd lspools
0 rbd,

root@ceph01:/opt/ceph# ceph osd pool get rbd size
size: 3

root@ceph01:/opt/ceph# ceph osd pool set rbd size 2
set pool 0 size to 2

root@ceph01:/opt/ceph# ceph osd pool set rbd min_size 1
set pool 0 min_size to 1
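
(To double-check, I assume the new pool settings also show up in the osd dump:)

root@ceph01:/opt/ceph# ceph osd dump | grep 'replicated size'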

root@ceph01:/opt/ceph# ceph status
    cluster d54a2216-b522-4744-a7cc-a2106e1281b6
     health HEALTH_WARN
            64 pgs degraded
            64 pgs stuck degraded
            64 pgs stuck inactive
            64 pgs stuck unclean
            64 pgs stuck undersized
            64 pgs undersized
            too few PGs per OSD (7 < min 30)
     monmap e3: 3 mons at {ceph01=172.16.70.158:6789/0,ceph02=172.16.70.159:6789/0,ceph03=172.16.70.160:6789/0}
            election epoch 8, quorum 0,1,2 ceph01,ceph02,ceph03
     osdmap e30: 9 osds: 9 up, 9 in
      pgmap v52: 64 pgs, 1 pools, 0 bytes data, 0 objects
            296 MB used, 45684 MB / 45980 MB avail
                  64 undersized+degraded+peered
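
In case the CRUSH rule is relevant here, I can also post the default rule; I assume it can be dumped with:

root@ceph01:/opt/ceph# ceph osd crush rule dump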


If I try to increase pg_num as the documentation recommends:

root@ceph01:/opt/ceph# ceph osd pool set rbd pg_num 512
Error E2BIG: specified pg_num 512 is too large (creating 448 new PGs on ~9 OSDs exceeds per-OSD max of 32)

Then I set pg_num to 280 instead (going from 64 to 280 creates only 216 new PGs, which stays under the 32-per-OSD limit from the error above):

root@ceph01:/opt/ceph# ceph osd pool set rbd pg_num 280
set pool 0 pg_num to 280

root@ceph01:/opt/ceph# ceph osd pool set rbd pgp_num 280
set pool 0 pgp_num to 280
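
(I assume I could keep raising pg_num toward a power of two in a further step of the same kind, since 280 -> 512 would again stay under the per-OSD limit mentioned in the error, but I have not tried that yet:)

root@ceph01:/opt/ceph# ceph osd pool set rbd pg_num 512
root@ceph01:/opt/ceph# ceph osd pool set rbd pgp_num 512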


root@ceph01:/opt/ceph# ceph status
    cluster d54a2216-b522-4744-a7cc-a2106e1281b6
     health HEALTH_WARN
            280 pgs degraded
            280 pgs stuck unclean
            280 pgs undersized
     monmap e3: 3 mons at {ceph01=172.16.70.158:6789/0,ceph02=172.16.70.159:6789/0,ceph03=172.16.70.160:6789/0}
            election epoch 8, quorum 0,1,2 ceph01,ceph02,ceph03
     osdmap e37: 9 osds: 9 up, 9 in
      pgmap v100: 280 pgs, 1 pools, 0 bytes data, 0 objects
            301 MB used, 45679 MB / 45980 MB avail
                 280 active+undersized+degraded
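
For one of the PGs that is still undersized, I assume I could also check which OSDs were chosen, e.g. (0.1 is just an example PG id):

root@ceph01:/opt/ceph# ceph pg map 0.1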




How can I get the PGs into the active+clean state? Or is the online documentation perhaps out of date?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
