Ceph is cool software, but from time to time it gives me gray hairs, and
I hope that is only because of a misunderstanding on my part. This time I
want to balance the load across three OSDs evenly (same usage %). Two
OSDs are 2 GB, one is 4 GB (test environment). For context: the pool is
erasure coded (k=2, m=1) and has a cache tier on top.
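A setup like this can be created roughly as follows (the pool and profile
names are only placeholders, and ruleset-failure-domain=osd is an assumption
here, since all three OSDs sit on one host):

  ceph osd erasure-code-profile set ecprofile k=2 m=1 ruleset-failure-domain=osd
  ceph osd pool create ecpool 128 128 erasure ecprofile   # erasure-coded base pool
  ceph osd pool create cachepool 128                      # replicated cache pool
  ceph osd tier add ecpool cachepool                      # attach the cache tier
  ceph osd tier cache-mode cachepool writeback
  ceph osd tier set-overlay ecpool cachepool              # route client I/O via the cache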
My osd tree is this:
# id    weight  type name               up/down  reweight
-1      8       root default
-4      8           room serverroom
-2      8               host test1
0       2                   osd.0       up       1
1       2                   osd.1       up       1
2       4                   osd.2       up       1
resulting in this strange usage (testdisk3 is the only user of the
pool):
Filesystem     1K-blocks     Used      Available  Use%  Mounted on
/dev/sda1        2085868     1209344     876524    58%  /mnt/osd.0
/dev/sdb1        2085868     1208256     877612    58%  /mnt/osd.1
/dev/sdc1        4183020     1211712    2971308    29%  /mnt/osd.2
/dev/rbd0        3562916     2044692    1317520    61%  /mnt/testdisk3
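For comparison with the plain df view, Ceph's own accounting can be pulled
up as well (just the standard commands, no special options assumed):

  ceph df     # cluster-wide raw usage plus per-pool usage
  rados df    # per-pool usage and object counts as seen by RADOS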
First thing: my weights seem to be silently ignored.
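For what it's worth, the CRUSH weights can be checked and changed like
this (osd.2 and the value 4 are simply taken from the tree above):

  ceph osd tree                    # shows the CRUSH weight per OSD
  ceph osd crush reweight osd.2 4  # persistently set the CRUSH weight of osd.2
  ceph osd reweight 2 1.0          # temporary 0..1 override, separate from the CRUSH weight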
Second thing: the used space on the RBD is ~2045 MB (see above).
Multiplied by (k+m=3) and divided by (k=2), it should occupy about
3068 MB on the cluster, but at the moment it uses 3629 MB. Even allowing
for XFS metadata (say 50 MB per drive) and the cache pool (max. 100 MB),
there is still a difference of roughly 300 MB. Where does this come from?
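Spelled out with the numbers from the df output above (quick shell check):

  # expected raw usage: Used on the rbd multiplied by (k+m)/k
  echo $(( 2044692 * 3 / 2 ))               # 3067038 KB, ~3067 MB
  # actual raw usage: sum of the Used columns of the three OSDs
  echo $(( 1209344 + 1208256 + 1211712 ))   # 3629312 KB, ~3629 MB
  # difference ~562 MB; minus ~150 MB XFS metadata and ~100 MB cache pool
  # leaves the ~300 MB in question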
Anybody have an explanation for this?

You have a three-part erasure-coded pool and three OSDs. Every piece of
data is going to be stored on all of them evenly with that setup.
The extra used space is probably the OSD journals, which appear to be
co-located.
-Greg
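If the journals are indeed co-located, they show up as a file (or symlink)
named "journal" in each OSD data directory; the paths below simply reuse
the mount points from the df output, and the config query assumes the OSD
admin sockets are reachable on that host:

  ls -lh /mnt/osd.*/journal                      # co-located journal files
  ceph daemon osd.0 config get osd_journal_size  # configured journal size (MB)
  # three co-located journals of ~100 MB each would cover the missing ~300 MB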
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com