Re: inconsistent number of pools

Hi, have you tried 'ceph health detail'?
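
As a rough sketch (the exact warning text and pool names will of course differ on your cluster), 'ceph health detail' should print the health check code behind the warning and name the offending pool, and you can cross-check per-pool object counts and PG numbers with:

# ceph health detail
# ceph df detail
# ceph osd pool ls detail

As far as I know, the "many more objects per pg than average" warning is the MANY_OBJECTS_PER_PG check and is driven by the mon_pg_warn_max_object_skew setting, so comparing objects/PG per pool from 'ceph df detail' should point you at the culprit.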


Quoting Lars Täuber <taeuber@xxxxxxx>:

Hi everybody,

the status report shows a HEALTH_WARN that I don't know how to get rid of.
It may be connected to recently removed pools.

# ceph -s
  cluster:
    id:     6cba13d1-b814-489c-9aac-9c04aaf78720
    health: HEALTH_WARN
            1 pools have many more objects per pg than average

  services:
    mon: 3 daemons, quorum mon1,mon2,mon3 (age 4h)
    mgr: mon1(active, since 4h), standbys: cephsible, mon2, mon3
    mds: cephfs_1:1 {0=mds3=up:active} 2 up:standby
    osd: 30 osds: 30 up (since 2h), 30 in (since 7w)

  data:
    pools:   5 pools, 1029 pgs
    objects: 315.51k objects, 728 GiB
    usage:   4.6 TiB used, 163 TiB / 167 TiB avail
    pgs:     1029 active+clean


!!! but:
# ceph osd lspools | wc -l
3

The status says there are 5 pools, but the listing shows only 3.
How do I find out which pool is the reason for the health warning?

Thanks
Lars



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



