Ceph not showing full capacity

Hi,

I have created a test Ceph cluster with Ceph Octopus using cephadm.

The cluster's total raw disk capacity is 262 TB, but it only allows about
132 TB to be used.
I have not set a quota on any of the pools. What could be the issue?
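
If the pools are replicated with size=2, usable capacity will be roughly half
of the raw capacity, since every object is written twice. A quick way to rule
a quota in or out and to see each pool's replication size (standard Ceph CLI;
<pool> is a placeholder for one of your pool names):

# Replication size, min_size, and any quota_bytes/quota_objects per pool
ceph osd pool ls detail

# Per-pool stored vs. raw-used breakdown
ceph df detail

# Replication size of a single pool
ceph osd pool get <pool> size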

Output from:
ceph -s
  cluster:
    id:     f8bc7682-0d11-11eb-a332-0cc47a5ec98a
    health: HEALTH_WARN
            clock skew detected on mon.strg-node3, mon.strg-node2
            2 backfillfull osd(s)
            4 pool(s) backfillfull
            1 pools have too few placement groups

  services:
    mon: 3 daemons, quorum strg-node1,strg-node3,strg-node2 (age 7m)
    mgr: strg-node3.jtacbn(active, since 7m), standbys: strg-node1.gtlvyv
    mds: cephfs-strg:1 {0=cephfs-strg.strg-node1.lhmeea=up:active} 1 up:standby
    osd: 48 osds: 48 up (since 7m), 48 in (since 5d)

  task status:
    scrub status:
        mds.cephfs-strg.strg-node1.lhmeea: idle

  data:
    pools:   4 pools, 289 pgs
    objects: 17.29M objects, 66 TiB
    usage:   132 TiB used, 130 TiB / 262 TiB avail
    pgs:     288 active+clean
             1   active+clean+scrubbing+deep
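
Reading the usage line above, the numbers line up with 2x replication rather
than a quota, assuming all four pools are replicated with size 2:

66 TiB stored  x 2 replicas = 132 TiB raw used   (matches "132 TiB used")
262 TiB raw    / 2 replicas = 131 TiB usable     (matches 132 used + 130 avail)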

The mounted volume shows:
node1:/     67T   66T  910G  99% /mnt/cephfs
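
The 67T mount size also fits: CephFS reports the data pool's stored bytes plus
its MAX AVAIL, and MAX AVAIL shrinks as the fullest OSDs approach their full
ratios. That is why the two backfillfull OSDs matter even though the cluster
is only about half full overall. Some commands for digging into the imbalance
(standard Ceph CLI; enabling the balancer is only one possible remedy):

# Per-OSD utilization; look for OSDs far above the ~50% average
ceph osd df tree

# Current nearfull/backfillfull/full thresholds in effect
ceph osd dump | grep ratio

# The balancer can even out PG placement across OSDs
ceph balancer status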