issues with ceph nautilus version

Hi,

Recently I upgraded my ceph cluster from version 13.0.1 to 14.0.0 (nautilus, dev), and since then I have noticed some strange data usage numbers on the cluster.

Here are the issues I'm seeing:

  1. The data usage reported is far more than the cluster's total capacity (see the sketch below this list):

usage:   16 EiB used, 164 TiB / 158 TiB avail

Before the upgrade, it used to report correctly:

usage:   1.10T used, 157T / 158T avail

  2. It reports that all of the OSDs and pools are full.
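
One observation on the magnitude: 16 EiB is essentially 2**64 bytes, and the reported available space (164 TiB) is larger than the reported total (158 TiB). That is the pattern you would expect if the used-bytes value went slightly negative and wrapped around as an unsigned 64-bit integer. A minimal sketch of that suspicion, in plain Python (the modulo 2**64 stands in for a uint64 counter; this is illustrative arithmetic, not Ceph code):

# Hypothetical illustration: if used were computed as total - avail in
# unsigned 64-bit arithmetic, and avail exceeded total (as in the output
# above), the result would wrap to just under 2**64 bytes, i.e. ~16 EiB.
EIB = 2**60
TIB = 2**40

total = 158 * TIB
avail = 164 * TIB

used = (total - avail) % 2**64   # unsigned 64-bit wraparound
print(f"used = {used / EIB:.1f} EiB")   # prints: used = 16.0 EiB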


Can someone please shed some light? Any help is greatly appreciated.

[root@hadoop1 my-ceph]# ceph --version
ceph version 14.0.0-480-g6c1e8ee (6c1e8ee14f9b25dc96684dbc1f8c8255c47f0bb9) nautilus (dev)

[root@hadoop1 my-ceph]# ceph -s
  cluster:
    id:     ee4660fd-167b-42e6-b27b-126526dab04d
    health: HEALTH_ERR
            87 full osd(s)
            11 pool(s) full

  services:
    mon: 3 daemons, quorum hadoop1,hadoop4,hadoop6
    mgr: hadoop6(active), standbys: hadoop1, hadoop4
    mds: cephfs-1/1/1 up  {0=hadoop3=up:creating}, 2 up:standby
    osd: 88 osds: 87 up, 87 in

  data:
    pools:   11 pools, 32588 pgs
    objects: 0  objects, 0 B
    usage:   16 EiB used, 164 TiB / 158 TiB avail
    pgs:     32588 active+clean
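
To see whether this is only a display problem in "ceph -s" or bad numbers in the cluster stats themselves, the raw counters can also be read directly through the python-rados bindings. A minimal sketch, assuming python-rados is installed and the client can read /etc/ceph/ceph.conf:

# Cross-check the raw cluster counters via the python-rados bindings
# (get_cluster_stats() reports its values in KiB).
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    stats = cluster.get_cluster_stats()
    print("kb total:", stats['kb'])
    print("kb used: ", stats['kb_used'])
    print("kb avail:", stats['kb_avail'])
finally:
    cluster.shutdown()

If the huge used figure shows up here as well, the problem would be in the cluster stats themselves rather than in how "ceph -s" formats them.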


Thanks in advance

-Raj


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
