Re: rbd space usage

Ha, that is your issue.

RBD does not know that space freed at the filesystem level is available
again: deleted data keeps occupying objects in the pool until the
corresponding blocks are discarded.

You have to trim your filesystem; see fstrim(8), as well as the discard
mount option.
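
Something like this (mountpoints taken from your df output below) releases
the already-freed blocks in one pass:

# fstrim -v /mnt/nfsroot/rbd0
# fstrim -v /mnt/nfsroot/rbd1

Alternatively, mounting with -o discard (or adding "discard" to the fstab
entry) makes the filesystem issue discards as blocks are freed, at some
performance cost:

# mount -o discard /dev/rbd0 /mnt/nfsroot/rbd0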

The related SCSI commands (discard/UNMAP) have to be passed all the way down
the stack, so you may need to check other levels as well (for instance, your
hypervisor's configuration).
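
A quick sanity check is lsblk's discard columns; non-zero DISC-GRAN/DISC-MAX
values mean the device accepts discards at that layer:

# lsblk -D /dev/rbd0

With a VM in between, the virtual disk would also need discard enabled (with
libvirt/QEMU, for instance, that is discard='unmap' on the disk driver), but
since you map the images directly with krbd that layer does not apply here.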

Regards,

On 02/28/2019 11:31 PM, solarflow99 wrote:
> yes, but:
> 
> # rbd showmapped
> id pool image snap device
> 0  rbd  nfs1  -    /dev/rbd0
> 1  rbd  nfs2  -    /dev/rbd1
> 
> 
> # df -h
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/rbd0       8.0T  4.8T  3.3T  60% /mnt/nfsroot/rbd0
> /dev/rbd1       9.8T   34M  9.8T   1% /mnt/nfsroot/rbd1
> 
> 
> only 5T is taken up
> 
> 
> On Thu, Feb 28, 2019 at 2:26 PM Jack <ceph@xxxxxxxxxxxxxx> wrote:
> 
>> Aren't you using a 3-replica pool?
>>
>> (15745GB + 955GB + 1595M) * 3 ~= 51157G (there is overhead involved)
>>
>> Best regards,
>>
>> On 02/28/2019 11:09 PM, solarflow99 wrote:
>>> thanks, I still can't understand what's taking up all the space (27.75% raw used)
>>>
>>> On Thu, Feb 28, 2019 at 7:18 AM Mohamad Gebai <mgebai@xxxxxxx> wrote:
>>>
>>>> On 2/27/19 4:57 PM, Marc Roos wrote:
>>>>> They are 'thin provisioned', meaning that if you create a 10GB rbd, it
>>>>> does not use 10GB at the start (afaik).
>>>>
>>>> You can use 'rbd -p rbd du' to see how much of these devices is
>>>> provisioned and check whether that is consistent with what you expect.
>>>>
>>>> Mohamad
>>>>
>>>>>
>>>>>
>>>>> -----Original Message-----
>>>>> From: solarflow99 [mailto:solarflow99@xxxxxxxxx]
>>>>> Sent: 27 February 2019 22:55
>>>>> To: Ceph Users
>>>>> Subject:  rbd space usage
>>>>>
>>>>> Using ceph df, it looks as if RBD images can use the total free space
>>>>> available in the pool they belong to (8.54% used), yet I know they are
>>>>> created with a --size parameter and that's what determines the actual
>>>>> space. I can't understand the difference I'm seeing: only 5T is being
>>>>> used, but ceph df shows 51T:
>>>>>
>>>>>
>>>>> /dev/rbd0       8.0T  4.8T  3.3T  60% /mnt/nfsroot/rbd0
>>>>> /dev/rbd1       9.8T   34M  9.8T   1% /mnt/nfsroot/rbd1
>>>>>
>>>>>
>>>>>
>>>>> # ceph df
>>>>> GLOBAL:
>>>>>     SIZE     AVAIL     RAW USED     %RAW USED
>>>>>     180T     130T     51157G       27.75
>>>>> POOLS:
>>>>>     NAME               ID     USED       %USED     MAX AVAIL     OBJECTS
>>>>>     rbd                0      15745G      8.54        39999G     4043495
>>>>>     cephfs_data        1           0         0        39999G           0
>>>>>     cephfs_metadata    2        1962         0        39999G          20
>>>>>     spider_stage       9       1595M         0        39999G       47835
>>>>>     spider             10       955G      0.52        39999G    42541237
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


