Re: too few PGs per OSD (16 < min 30) but I set pool_default_pg_num: 300 in Ansible

Hi,

see comments below.

JC
On Jun 14, 2017, at 07:23, Stéphane Klein <contact@xxxxxxxxxxxxxxxxxxx> wrote:

Hi,

I have this parameter in my Ansible configuration:

pool_default_pg_num: 300 # (100 * 6) / 2 = 300
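
(For reference: this ceph-ansible variable is just a default that gets templated into ceph.conf; the exact rendering depends on the ceph-ansible version, but it should end up roughly as below. Whether pgp num is filled from the same variable is an assumption on my part.)

[global]
osd pool default pg num = 300
osd pool default pgp num = 300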

But I have this error:

# ceph status
    cluster 800221d2-4b8c-11e7-9bb9-cffc42889917
     health HEALTH_ERR
            73 pgs are stuck inactive for more than 300 seconds
            22 pgs degraded
            9 pgs peering
            64 pgs stale
            22 pgs stuck degraded
            9 pgs stuck inactive
            64 pgs stuck stale
            31 pgs stuck unclean
            22 pgs stuck undersized
            22 pgs undersized
            too few PGs per OSD (16 < min 30)
     monmap e1: 2 mons at {ceph-storage-rbx-1=172.29.20.30:6789/0,ceph-storage-rbx-2=172.29.20.31:6789/0}
            election epoch 4, quorum 0,1 ceph-storage-rbx-1,ceph-storage-rbx-2
     osdmap e41: 12 osds: 6 up, 6 in; 8 remapped pgs
            flags sortbitwise,require_jewel_osds
      pgmap v79: 64 pgs, 1 pools, 0 bytes data, 0 objects
As this line shows, you only have 64 PGs in your cluster so far, hence the warning. This parameter must be set before you deploy your cluster, or at least before you create your first pool; it only affects pools created after it is in place. (Some commands for adjusting the existing pool follow the status output below.)
            30919 MB used, 22194 GB / 22225 GB avail
                  33 stale+active+clean
                  22 stale+active+undersized+degraded
                   9 stale+peering
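
Assuming the single existing pool is the default rbd pool created at deploy time (the pool name is not shown in the status output), you can raise its PG count after the fact with something like:

# ceph osd pool get rbd pg_num
# ceph osd pool set rbd pg_num 300
# ceph osd pool set rbd pgp_num 300

Note that pg_num can only be increased, never decreased, and pgp_num has to be raised to match before the new PGs are actually rebalanced onto the OSDs.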

I have 2 hosts with 3 partitions each, so 3 x 2 = 6 OSDs?

Why is it 16 < min 30? I set pg_num to 300.
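
(For what it's worth, the 16 in the warning is consistent with the one pool that exists: 64 PGs x 3 replicas / 12 OSDs in the OSD map = 16 PG copies per OSD, which is below the default mon_pg_warn_min_per_osd of 30. The replica count of 3 and the use of all 12 OSDs rather than the 6 that are in are my assumptions; they just happen to match the numbers in your output.)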

Best regards,
Stéphane
--
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com