Re: ceph uses too much disk space!!

Hi Blairo
thanks for the reply :)

On 10/06/2013 02:11 AM, Blair Bethwaite wrote:
Hi Ali,

> Message: 1
> Date: Sat, 05 Oct 2013 09:22:22 +0300
> From: Linux Chips <linux.chips@xxxxxxxxx>
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: ceph uses too much disk space!!
> Message-ID: <524FB01E.3000907@xxxxxxxxx>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Hi everyone;
> we have a small testing cluster: one node with 4 OSDs of 3TB each. I
> created one RBD image of 4TB. Now the cluster is nearly full:
<SNIP>
> /dev/sda                    2.8T  2.1T  566G  79% /var/lib/ceph/osd/ceph-0
> /dev/sdb                    2.8T  2.4T  316G  89% /var/lib/ceph/osd/ceph-1
> /dev/sdc                    2.8T  2.2T  457G  84% /var/lib/ceph/osd/ceph-2
> /dev/sdd                    2.8T  2.2T  447G  84% /var/lib/ceph/osd/ceph-3
>
> # ceph osd pool get rbd min_size
> min_size: 1
>
> # ceph osd pool get rbd size
> size: 2
>
>
> 4 disks at 3TB should give me 12TB, and 4TBx2 should be 8TB. That is 66%,
> not 80% as ceph df shows (%RAW USED).
> Where is this space leaking? How can I fix it?
> Or is this normal behavior and just due to overhead?

I'm not sure what overhead there might be from Ceph's metadata, but I think you might be basing your calculations on bad assumptions to begin with (rough numbers sketched below):
1) pin down your "3TB" OSD drive size properly - a vendor 3TB is 3000GB decimal, i.e. only about 2.73TiB as df and ceph report it
2) account for the OSD filesystem's own overhead, e.g., format a drive and look at its usage before writing anything (probably <95% of raw capacity)
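
For example, a quick back-of-the-envelope sketch in Python (the ~2% filesystem overhead is just an assumed figure for illustration, not a measured one):

    TB = 1e12             # what the drive vendor sells: "3TB" = 3*10^12 bytes
    TiB = 2.0**40         # the unit df and ceph actually report in

    raw_bytes = 3 * TB
    print(raw_bytes / TiB)                       # ~2.73 TiB, roughly the 2.8T df shows per drive

    fs_overhead = 0.02                           # assumed ~2% lost to filesystem metadata
    print(raw_bytes * (1 - fs_overhead) / TiB)   # ~2.67 TiB actually usable per OSD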

Indeed, if you look at your df output you've got ~2.8TB of total capacity per drive, and if you had checked before writing any data you'd have seen some space already in use by the filesystem. And df shows just over 2TB used per drive, which makes sense given you've created a 4TiB rbd in a pool with a replication factor of 2 (i.e., two copies of your rbd data are spread across the OSDs).
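
To spell out that replication arithmetic (a minimal sketch, assuming the 4TiB image ends up close to fully allocated and spread evenly over the four OSDs):

    TiB = 2.0**40
    image = 4 * TiB        # the rbd image as created
    replicas = 2           # pool size
    osds = 4

    per_osd = image * replicas / osds
    print(per_osd / TiB)   # 2.0 TiB per OSD, before any per-object or filesystem overhead

which is in the same ballpark as the 2.1-2.4T used that your df reports.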


Still, the numbers don't add up, or the overhead really is very large:

# ceph df
GLOBAL:
    SIZE       AVAIL     RAW USED     %RAW USED
    11178G     1783G     8986G        80.39

POOLS:
    NAME         ID     USED       %USED     OBJECTS
    data         0      0          0         0
    metadata     1      40100K     0         30
    rbd          2      3703G      33.13     478583


rbd is using 33% and raw usage is 80%. 33x2=66... that means we have roughly 80-66=14% overhead on raw storage - maybe someone can confirm that overhead? Besides, "ceph df" shows that metadata is only 40MB, which is nothing.
I'm not even using all 4TB I allocated for the rbd image. I have ~11.2TB of raw storage and a ~3.7TB (actually 3703/1024 = ~3.6TiB) rbd image. 3.7x2 = 7.4TB, and raw usage is ~9TB, so that's ~1.6TB of overhead on ~7.4TB of data, i.e. roughly 21% on top of the data..... that can't be correct.
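
Redoing that arithmetic explicitly (GiB figures taken straight from the ceph df output above):

    raw_total = 11178.0    # GLOBAL SIZE
    raw_used  = 8986.0     # GLOBAL RAW USED
    pool_used = 3703.0     # rbd pool USED (before replication)
    replicas  = 2

    expected = pool_used * replicas    # 7406 GiB the data itself needs at size=2
    overhead = raw_used - expected     # ~1580 GiB unaccounted for
    print(100 * overhead / expected)   # ~21% relative to the stored data
    print(100 * overhead / raw_total)  # ~14% of the raw capacity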

Also, keep in mind that Ceph is breaking up your rbd into chunks on the host filesystem/s and probably storing metadata (in extended attributes) for every chunk.
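
To put a very rough number on that, here is a sketch using the object count from your ceph df (the 64KiB per-object cost is purely an assumed figure for illustration, not something measured):

    objects = 478583                  # rbd pool object count from ceph df
    replicas = 2                      # each object is stored twice (size=2)
    per_object_cost = 64 * 1024.0     # assumed: fs metadata + allocation slack per object file

    total = objects * replicas * per_object_cost
    print(total / 1024**3)            # ~58 GiB for that assumed per-object cost

Scale the per-object figure up or down to get a feel for how quickly ~1M object files add up.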

Maybe it's worth mentioning that my OSDs are formatted with btrfs. I don't think btrfs has ~14% overhead... or does it?

--
Cheers,
~Blairo

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
