health HEALTH_WARN too few pgs per osd (16 < min 20)

I deployed my cluster with these commands.

mkdir "clustername"

cd "clustername"

ceph-deploy install mon1 mon2 mon3 mds1 mds2 mds3 osd200

ceph-deploy new mon1 mon2 mon3

ceph-deploy mon create mon1 mon2 mon3

ceph-deploy gatherkeys  mon1 mon2 mon3

ceph-deploy osd prepare --fs-type ext4 osd200:/osd/osd1 osd200:/osd/osd2
osd200:/osd/osd3 osd200:/osd/osd4 osd200:/osd/osd5 osd200:/osd/osd6
osd200:/osd/osd7 osd200:/osd/osd8 osd200:/osd/osd9 osd200:/osd/osd10
osd200:/osd/osd11 osd200:/osd/osd12

ceph-deploy osd activate osd200:/osd/osd1 osd200:/osd/osd2 osd200:/osd/osd3
osd200:/osd/osd4 osd200:/osd/osd5 osd200:/osd/osd6 osd200:/osd/osd7
osd200:/osd/osd8 osd200:/osd/osd9 osd200:/osd/osd10 osd200:/osd/osd11
osd200:/osd/osd12


ceph-deploy admin mon1 mon2 mon3 mds1 mds2 mds3 osd200 salt1

ceph-deploy mds create mds1 mds2 mds3

but in the end I get this:

[sm1ly@salt1 ceph]$ sudo ceph -s
    cluster 0b2c9c20-985a-4a39-af8e-ef2325234744
     health HEALTH_WARN 19 pgs degraded; 192 pgs stuck unclean; recovery 21/42 objects degraded (50.000%); too few pgs per osd (16 < min 20)
     monmap e1: 3 mons at {mon1=10.60.0.110:6789/0,mon2=10.60.0.111:6789/0,mon3=10.60.0.112:6789/0}, election epoch 6, quorum 0,1,2 mon1,mon2,mon3
     mdsmap e6: 1/1/1 up {0=mds1=up:active}, 2 up:standby
     osdmap e61: 12 osds: 12 up, 12 in
      pgmap v103: 192 pgs, 3 pools, 9470 bytes data, 21 objects
            63751 MB used, 3069 GB / 3299 GB avail
            21/42 objects degraded (50.000%)
                 159 active
                  14 active+remapped
                  19 active+degraded
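
If I read the warning right, 192 PGs across 12 OSDs is 192 / 12 = 16 PGs per OSD, below the default minimum of 20. From what I found in the docs, the per-pool PG count can only be increased, with something like this (untested; the pool names are whatever "ceph osd lspools" reports, and 256 is just a guess at a value):

ceph osd lspools
ceph osd pool set <poolname> pg_num 256
ceph osd pool set <poolname> pgp_num 256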


mon[123] and mds[123] are VMs; osd200 is a hardware server, because on the VMs it showed bad performance.

Some searching tells me the problem is that I have only one OSD node. Can I ignore that for tests?
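
If the single OSD host really is the cause (all 12 OSDs sit on osd200, so CRUSH cannot place replicas on different hosts), one workaround I am considering for a test cluster, untested, is to let CRUSH pick separate OSDs instead of separate hosts by putting this in ceph.conf before creating the cluster (I am not sure it helps after the fact):

[global]
osd crush chooseleaf type = 0

or, alternatively, dropping the replica count on each pool:

ceph osd pool set <poolname> size 1
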
Another search points me to placement groups (PGs), but I can't find how to get the pgid.
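
For the pgid part, from what I can tell the placement groups can be listed with something like this (untested here; <poolname> and <objectname> are placeholders):

ceph pg stat
ceph pg dump | head
ceph osd pool get <poolname> pg_num

where each pgid looks like <pool-number>.<hex-suffix>, e.g. 0.1a, and the PG for a single object can be looked up with:

ceph osd map <poolname> <objectname>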


-- 
yours respectfully, Alexander Vasin.

8 926 1437200
icq: 9906064

