Thanks, I still can't understand what's taking up all the space.
On Thu, Feb 28, 2019 at 7:18 AM Mohamad Gebai <mgebai@xxxxxxx> wrote:
On 2/27/19 4:57 PM, Marc Roos wrote:
> They are 'thin provisioned', meaning that if you create a 10 GB RBD image,
> it does not use 10 GB at the start. (AFAIK)
You can use 'rbd -p rbd du' to see how much of each of these devices is
provisioned versus actually used, and check whether that is consistent with
what you expect.
Mohamad
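
A minimal sketch of that check (the pool name 'rbd' comes from this thread;
the image name and figures below are purely illustrative):

    # a freshly created image is thin provisioned: space is allocated on
    # write, not at creation time
    $ rbd create rbd/test-image --size 10G

    # show provisioned vs. actually used space for every image in the pool
    $ rbd -p rbd du
    NAME           PROVISIONED   USED
    test-image           10GiB      0
    ...

The USED column reports the space actually allocated in the pool for each
image, which the --size (PROVISIONED) value alone does not tell you.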
>
>
> -----Original Message-----
> From: solarflow99 [mailto:solarflow99@xxxxxxxxx]
> Sent: 27 February 2019 22:55
> To: Ceph Users
> Subject: rbd space usage
>
> Using ceph df, it looks as if RBD images can use all of the free space
> available in the pool they belong to (8.54% used), yet I know they are
> created with a --size parameter, and that's what determines the actual
> space. I can't understand the difference I'm seeing: only 5T is being
> used, but ceph df shows 51T:
>
>
> /dev/rbd0 8.0T 4.8T 3.3T 60% /mnt/nfsroot/rbd0
> /dev/rbd1 9.8T 34M 9.8T 1% /mnt/nfsroot/rbd1
>
>
>
> # ceph df
> GLOBAL:
>     SIZE     AVAIL     RAW USED     %RAW USED
>     180T     130T      51157G       27.75
> POOLS:
>     NAME                ID     USED       %USED     MAX AVAIL     OBJECTS
>     rbd                 0      15745G     8.54      39999G        4043495
>     cephfs_data         1      0          0         39999G        0
>     cephfs_metadata     2      1962       0         39999G        20
>     spider_stage        9      1595M      0         39999G        47835
>     spider              10     955G       0.52      39999G        42541237
>
>
>
>
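
One way to read those numbers, assuming a pre-Nautilus release and 3x
replication (neither is stated in the thread): the per-pool USED values are
counted before replication, while the GLOBAL RAW USED is counted after, so
roughly

    (15745G + 955G + 1595M) x 3 ≈ 50100G

which is close to the 51157G RAW USED shown above.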
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com