Hi Blairo
Thanks for the reply :)

On 10/06/2013 02:11 AM, Blair Bethwaite wrote:
The numbers still don't add up, or else the overhead really is huge:

    # ceph df
    GLOBAL:
        SIZE      AVAIL    RAW USED    %RAW USED
        11178G    1783G    8986G       80.39
    POOLS:
        NAME        ID    USED      %USED    OBJECTS
        data        0     0         0        0
        metadata    1     40100K    0        30
        rbd         2     3703G     33.13    478583

The rbd pool is using 33% and raw usage is 80%. 33 x 2 = 66, so that leaves 80 - 66 = 14% of raw storage as overhead. Can someone confirm that overhead figure? Besides, "ceph df" shows the metadata pool is only ~40MB, which is nothing, and I am not even using all 4TB I allocated for the rbd image.

I have ~11.2TB of raw storage and a 3.7TB rbd image (actually 3703/1024 = ~3.6TB). 3.7 x 2 = 7.4TB, yet raw usage is ~9TB, so I have ~1.6TB of overhead for every 3.7TB of data, i.e. 1.6/3.7 = 43% overhead on the data itself. That can't be correct.

Maybe it's worth mentioning that my OSDs are formatted as btrfs. I don't think btrfs has 14% overhead, or does it?
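To make the arithmetic easy to check, here is a small sketch that reproduces it from the "ceph df" figures. The replication factor of 2 is an assumption (that is why the pool usage gets doubled); the variable names are mine, not anything Ceph reports.

```python
# Sanity check of the overhead arithmetic, using the figures
# reported by "ceph df" above. replicas = 2 is an assumption
# (the pool's replication size), not something ceph df shows.

raw_total_g = 11178   # GLOBAL SIZE
raw_used_g  = 8986    # GLOBAL RAW USED
pool_used_g = 3703    # rbd pool USED
replicas    = 2       # assumed pool replication size

expected_raw_g = pool_used_g * replicas        # 3703 * 2 = 7406
overhead_g     = raw_used_g - expected_raw_g   # 8986 - 7406 = 1580

print(f"expected raw usage: {expected_raw_g}G")
print(f"actual raw usage:   {raw_used_g}G")
print(f"overhead:           {overhead_g}G "
      f"({100 * overhead_g / pool_used_g:.1f}% of the rbd pool data)")
```

The last line is the same ratio as 1.6TB / 3.7TB above, just computed in gigabytes.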
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com