Re: too few PGs per OSD (16 < min 30) but I set pool_default_pg_num: 300 in Ansible

And now:

ceph status
    cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
     health HEALTH_OK
     monmap e1: 2 mons at {ceph-storage-rbx-1=172.29.20.30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
            election epoch 4, quorum 0,1 ceph-storage-rbx-1,ceph-storage-rbx-2
     osdmap e21: 6 osds: 6 up, 6 in
            flags sortbitwise,require_jewel_osds
      pgmap v60: 160 pgs, 1 pools, 0 bytes data, 0 objects
            30924 MB used, 22194 GB / 22225 GB avail
                 160 active+clean

Thanks, all is perfect!
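
For anyone arriving via the subject line: the pool default pg_num pushed through Ansible (which corresponds to osd pool default pg_num in ceph.conf) is only consulted when a pool is created, so a pool that already existed keeps its original pg_num and has to be resized by hand. A minimal sketch of the two commands this thread implies, with the pool name "rbd" and the target of 160 PGs taken from the status outputs in the thread:

 ceph osd pool set rbd pg_num 160    # create the additional PGs
 ceph osd pool set rbd pgp_num 160   # then let placement use them; must match pg_num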

2017-06-14 17:00 GMT+02:00 Stéphane Klein <contact@xxxxxxxxxxxxxxxxxxx>:
And now:

2017-06-14 17:00 GMT+02:00 Stéphane Klein <contact@xxxxxxxxxxxxxxxxxxx>:
Ok, I missed:

 ceph osd pool set rbd pgp_num 160

Now I have:

 ceph status
    cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
     health HEALTH_ERR
            9 pgs are stuck inactive for more than 300 seconds
            9 pgs stuck inactive
            9 pgs stuck unclean
     monmap e1: 2 mons at {ceph-storage-rbx-1=172.29.20.30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
            election epoch 4, quorum 0,1 ceph-storage-rbx-1,ceph-storage-rbx-2
     osdmap e21: 6 osds: 6 up, 6 in
            flags sortbitwise,require_jewel_osds
      pgmap v50: 160 pgs, 1 pools, 0 bytes data, 0 objects
            30925 MB used, 22194 GB / 22225 GB avail
                 143 active+clean
                  17 activating
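
The "stuck inactive" errors and the 17 PGs in "activating" are the freshly split PGs peering after the pgp_num change; they normally reach active+clean on their own, which is exactly what the HEALTH_OK status at the top of the thread shows a few minutes later. To check whether any PG is genuinely stuck rather than still peering, the standard commands are (a sketch, nothing specific to this cluster):

 ceph health detail            # names the individual PGs behind the HEALTH_ERR summary
 ceph pg dump_stuck inactive   # per-PG view of anything stuck inactive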

2017-06-14 16:56 GMT+02:00 Stéphane Klein <contact@xxxxxxxxxxxxxxxxxxx>:
2017-06-14 16:40 GMT+02:00 David Turner <drakonstein@xxxxxxxxx>:
Once those PGs have finished creating and the cluster is back to normal

How can I see the cluster migration progress?
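
The usual answer, for the record: the pgmap line of "ceph status" shows the PG state counts together with any recovery/backfill activity, and "ceph -w" streams the same information live as it changes. A small sketch:

 ceph -w                   # follow cluster events and PG state transitions live
 watch -n 5 ceph status    # or simply poll the summary shown below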

Now I have:

# ceph status
    cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
     health HEALTH_WARN
            pool rbd pg_num 160 > pgp_num 64
     monmap e1: 2 mons at {ceph-storage-rbx-1=172.29.20.30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
            election epoch 4, quorum 0,1 ceph-storage-rbx-1,ceph-storage-rbx-2
     osdmap e19: 6 osds: 6 up, 6 in
            flags sortbitwise,require_jewel_osds
      pgmap v45: 160 pgs, 1 pools, 0 bytes data, 0 objects
            30923 MB used, 22194 GB / 22225 GB avail
                 160 active+clean
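
This HEALTH_WARN means exactly what it says: pg_num on the rbd pool is 160 while pgp_num is still 64, so the new PGs exist but data placement still maps onto the old 64; it clears once pgp_num is raised to match, which is the "ceph osd pool set rbd pgp_num 160" applied further up in the thread. The two values can be checked directly (sketch):

 ceph osd pool get rbd pg_num
 ceph osd pool get rbd pgp_num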




