Re: Ceph free space

Thanks for the reply,

>>ceph df

GLOBAL:

    SIZE       AVAIL     RAW USED     %RAW USED

    13910G     2472G       11437G         82.22

POOLS:

    NAME     ID     USED      %USED     MAX AVAIL     OBJECTS

    rbd      0      3792G     27.26          615G      971526

 

How can I free the raw used space?
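(For what it's worth, the only ideas I have myself are below. These are only guesses on my side, nothing in this thread confirms them, and they assume the VM disks are attached with discard/trim enabled, e.g. virtio-scsi with discard=unmap. Please correct me if this is wrong.)

>>fstrim -av
(run inside each VM; with discard passed through, freed blocks should be released back to the rbd pool)

>>rbd snap ls vm-100-disk-1
>>rbd snap ls vm-100-disk-2
(check for forgotten snapshots, which also consume raw space)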

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Henrik Korkuc
Sent: Tuesday, March 10, 2015 10:13 AM
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Ceph free space

 

On 3/10/15 11:06, Mateusz Skała wrote:

Hi,

Something is wrong with the free space in my cluster. In a cluster with 10 OSDs (5*1TB + 5*2TB), 'ceph -s' shows:

11425 GB used, 2485 GB / 13910 GB avail

But I have only 2 rbd disks in one pool ('rbd'):

>>rados df

pool name       category                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB

rbd             -                 3976154023       971434            0         6474           0     11542224   1391869743       742847    385900453

  total used     11988041672       971434

  total avail     2598378648

  total space    14586420320

 

>>rbd ls

vm-100-disk-1

vm-100-disk-2

 

>>rbd info vm-100-disk-1

rbd image 'vm-100-disk-1':

        size 16384 MB in 4096 objects

        order 22 (4096 kB objects)

        block_name_prefix: rbd_data.14ef2ae8944a

        format: 2

        features: layering

 

>>rbd info vm-100-disk-2

rbd image 'vm-100-disk-2':

        size 4096 GB in 1048576 objects

        order 22 (4096 kB objects)

        block_name_prefix: rbd_data.15682ae8944a

        format: 2

        features: layering

 

So my rbd disks use only 4112 GB in total. The replication size of the pool is 2 (the default), so the used space should be about 8224 GB. Why does 'ceph -s' show 11425 GB?
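To spell out my arithmetic (these are the provisioned image sizes from 'rbd info' above, not necessarily the data actually written):

    vm-100-disk-1:               16384 MB  =   16 GB
    vm-100-disk-2:                            4096 GB
    provisioned total:                        4112 GB
    expected raw use at size 2:  4112 GB * 2 = 8224 GB
    reported by 'ceph -s':                   11425 GB used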

 

 

Can someone explain this situation?

Thanks, Mateusz

 

Hey,

what does "ceph df" show?

ceph -s shows raw disk usage, so there will be some overhead from the file system on the OSDs; also, maybe you left some other files on those disks?
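A quick way to cross-check is to look at what the OSD file systems themselves report. Assuming the default mount points (adjust the paths to your layout), something like:

>>df -h /var/lib/ceph/osd/ceph-*
(compare the per-OSD usage with what ceph reports)

>>du -sh /var/lib/ceph/osd/ceph-0/*
(anything large outside current/ and the journal would be non-Ceph data left on the disk)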






_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
