Re: understanding % used in ceph df

Hi, I think your question is really about the MAX AVAIL value. See how Ceph calculates it: http://docs.ceph.com/docs/luminous/rados/operations/monitoring/#checking-a-cluster-s-usage-stats
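
As a rough sanity check against the ceph df output below (my assumption: MAX AVAIL is projected from the most-full OSD and scaled by the pool's data overhead, so these are back-of-the-envelope numbers):

    # EC 3+2 writes 5 raw chunks for every 3 data chunks
    12.4 TiB (MAX AVAIL) * 5/3  ~= 20.7 TiB raw left before the first OSD hits full_ratio
    # replicated pools (size 3) hit the same raw limit:
    6.87 TiB (MAX AVAIL) * 3    ~= 20.6 TiB raw
    # GLOBAL AVAIL is 61.6 TiB raw -- the gap is the imbalance across your OSDs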

A single full OSD makes the whole pool full as well, so keep reweighting the nearfull OSDs.
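
To see the per-OSD spread and to reweight, something like this should work (a sketch; the 110 threshold is just an example, meaning "touch OSDs more than 10% above the mean utilization"):

    ceph osd df                                 # per-OSD utilization and variance
    ceph osd test-reweight-by-utilization 110   # dry run, shows the proposed changes
    ceph osd reweight-by-utilization 110        # apply the reweight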

Jakub


On 19 Oct 2018 at 16:34, "Florian Engelmann" <florian.engelmann@xxxxxxxxxxxx> wrote:
Hi,


Our Ceph cluster has 6 nodes with 8 disks each. The cluster is used for
object storage only (right now). We use EC 3+2 on the buckets.data pool.

We had a problem with RadosGW segfaulting (12.2.5) until we upgraded to
12.2.8. We saw almost 30,000 radosgw crashes, leaving millions of
unreferenced objects behind (failed multipart uploads?). They filled our
cluster so fast that we are now in danger of running out of space.

As you can see, we are reweighting some OSDs right now. But the real
question is how "used" is calculated in ceph df.

Global: %RAW USED = 76.49%

while

x-1.rgw.buckets.data Used = 90.32%

Am I right that this is because we should still be "able" to lose one OSD node?

If that's true, reweighting can only help a little to rebalance the
capacity used on each node?
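
I assume the per-host totals needed to answer this would show up in:

    ceph osd df tree    # utilization rolled up per host and per OSD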

The only chance we have right now to survive until new HDDs arrive is to
delete objects, right?
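
If so, I assume the unreferenced objects could be located with the (slow) RGW orphan scan; the job ID below is just a made-up name:

    radosgw-admin orphans find --pool=x-1.rgw.buckets.data --job-id=orphan-scan-1
    radosgw-admin orphans list-jobs             # check on running/finished scan jobs
    radosgw-admin gc process                    # flush pending garbage collection first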


ceph -s
   cluster:
     id:     a2222146-6561-307e-b032-xxxxxxxxxxxxx
     health: HEALTH_WARN
             3 nearfull osd(s)
             13 pool(s) nearfull
             1 large omap objects
             766760/180478374 objects misplaced (0.425%)

   services:
     mon:     3 daemons, quorum ceph1-mon3,ceph1-mon2,ceph1-mon1
     mgr:     ceph1-mon2(active), standbys: ceph1-mon1, ceph1-mon3
     osd:     36 osds: 36 up, 36 in; 24 remapped pgs
     rgw:     3 daemons active
     rgw-nfs: 2 daemons active

   data:
     pools:   13 pools, 1424 pgs
     objects: 36.10M objects, 115TiB
     usage:   200TiB used, 61.6TiB / 262TiB avail
     pgs:     766760/180478374 objects misplaced (0.425%)
              1400 active+clean
              16   active+remapped+backfill_wait
              8    active+remapped+backfilling

   io:
     client:   3.05MiB/s rd, 0B/s wr, 1.12kop/s rd, 37op/s wr
     recovery: 306MiB/s, 91objects/s

ceph df
GLOBAL:
     SIZE       AVAIL       RAW USED     %RAW USED
     262TiB     61.6TiB       200TiB         76.49
POOLS:
     NAME                       ID     USED        %USED     MAX AVAIL      OBJECTS
     iscsi-images                1         35B         0       6.87TiB            5
     .rgw.root                   2     3.57KiB         0       6.87TiB           18
     x-1.rgw.buckets.data        6      115TiB     90.32       12.4TiB     36090523
     x-1.rgw.control             7          0B         0       6.87TiB            8
     x-1.rgw.meta                8      943KiB         0       6.87TiB         3265
     x-1.rgw.log                 9          0B         0       6.87TiB          407
     x-1.rgw.buckets.index      12          0B         0       6.87TiB         3096
     x-1.rgw.buckets.non-ec     13          0B         0       6.87TiB         1623
     default.rgw.meta           14        373B         0       6.87TiB            3
     default.rgw.control        15          0B         0       6.87TiB            8
     default.rgw.log            16          0B         0       6.87TiB            0
     scbench                    17          0B         0       6.87TiB            0
     rbdbench                   18     1.00GiB      0.01       6.87TiB          260



Regards,
Flo
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

