Hello,
This might sound strange, but I could not find an answer on Google or in the docs; maybe it is called something else. I don't understand the pool capacity policy and how to set/define it. I have created a simple cluster for CephFS on 4 servers, each with a 30 GB disk, so 120 GB in total. On top of that I built a replicated metadata pool with size 3 and an erasure-coded data pool with k=2, m=1. I created the CephFS and things look good, but "ceph df" shows that not all of the space is available:
ceph df
GLOBAL:
    SIZE     AVAIL      RAW USED     %RAW USED
    119G     68941M     53922M       43.89
POOLS:
    NAME                ID     USED       %USED     MAX AVAIL     OBJECTS
    cephfs_data         1      28645M     42.22     39210M        61674
    cephfs_metadata     2      171M       0.87      19605M        1089
pg_num for metadata is 8
pg_num for data is 40
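For what it's worth, the numbers above seem internally consistent with how Ceph computes per-pool MAX AVAIL: it projects the raw space usable before the fullest OSD hits the full ratio, then divides by each pool's space amplification (size for replicated pools, (k+m)/k for EC pools). A minimal sketch of that arithmetic, where the 58815 MB projected-raw figure is inferred from the output (19605 * 3), not read from any command:

```python
# Sketch of how the MAX AVAIL values in the "ceph df" output above
# could be derived. The projected raw figure is an inference from the
# metadata pool's MAX AVAIL (19605 MB * size 3), not a real ceph value.

def max_avail_replicated(projected_raw_mb: int, size: int) -> int:
    # A replicated pool stores `size` full copies -> amplification = size.
    return projected_raw_mb // size

def max_avail_ec(projected_raw_mb: int, k: int, m: int) -> int:
    # An EC pool stores (k+m) chunks per k data chunks
    # -> usable = raw * k / (k + m).
    return projected_raw_mb * k // (k + m)

projected_raw_mb = 58815  # inferred: 19605 MB * 3 (metadata pool, size 3)

print(max_avail_replicated(projected_raw_mb, 3))  # 19605 -> cephfs_metadata
print(max_avail_ec(projected_raw_mb, 2, 1))       # 39210 -> cephfs_data
```

If that model is right, the "missing" space is just the EC and replication overhead plus the projection against the most-full OSD, not space the pools refuse to use.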
Am I doing something wrong? I want as much space for data as possible.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com