Re: understanding pool capacity and usage

On Fri, 27 Jul 2018 at 12:24, Anton Aleksandrov <anton@xxxxxxxxxxxxxx> wrote:

Hello,

This might sound strange, but I could not find an answer on Google or in the docs; maybe it is called something else.

I don't understand the pool capacity policy and how to set/define it. I have created a simple cluster for CephFS on 4 servers, each with a 30GB disk - 120GB in total. On top of that I built a replicated metadata pool with size 3 and an erasure-coded data pool with k=2, m=1. I made the CephFS and things look good, but "ceph df" shows that not all of the space can be used.

ceph df
GLOBAL:
    SIZE     AVAIL      RAW USED     %RAW USED
    119G     68941M       53922M         43.89
POOLS:
    NAME                ID     USED       %USED     MAX AVAIL     OBJECTS
    cephfs_data         1      28645M     42.22        39210M       61674
    cephfs_metadata     2        171M      0.87        19605M        1089

pg_num for metadata is 8
pg_num for data is 40

Am I doing anything wrong? I want as much space for data as possible.
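
As a rough sanity check of what this layout can hold at best, the overhead arithmetic works out as below (a minimal sketch assuming exactly 4 x 30GB OSDs and the pool settings described above; real "ceph df" figures will be lower because of per-OSD overhead and the full-ratio reserve):

# Back-of-the-envelope capacity for 4 OSDs x 30 GB with the pools above.
# All figures here are illustrative assumptions, not actual ceph output.

RAW_TOTAL_GB = 4 * 30              # 120 GB of raw disk across the cluster

REPLICA_OVERHEAD = 3.0             # metadata pool, size=3: 3x raw per byte
EC_OVERHEAD = (2 + 1) / 2          # data pool, k=2 m=1: 1.5x raw per byte

# Upper bound if every raw byte could go to the data pool:
print(RAW_TOTAL_GB / EC_OVERHEAD)       # 80.0 GB of CephFS data, at best

# The same raw space held as 3x replicas would only fit:
print(RAW_TOTAL_GB / REPLICA_OVERHEAD)  # 40.0 GB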



Looks like it says that cephfs_data can take roughly 39G more. With the k=2, m=1 erasure coding that is about 59G of raw space, which together with the ~54G of raw space already in use accounts for nearly all of your 119G. How does this differ from your expectations?
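
To make the accounting concrete, the quoted "ceph df" numbers can be cross-checked as below (a small sketch; the 1.5x and 3.0x multipliers follow from the EC 2+1 and size=3 settings described above):

# Cross-check of the ceph df output above (values in MB).
avail_raw      = 68941     # GLOBAL AVAIL
raw_used       = 53922     # GLOBAL RAW USED
max_avail_meta = 19605     # cephfs_metadata, replicated size=3 -> 3.0x raw
max_avail_data = 39210     # cephfs_data, EC k=2 m=1            -> 1.5x raw

print(max_avail_meta * 3.0)   # 58815.0 MB of raw space
print(max_avail_data * 1.5)   # 58815.0 MB -- the same raw space

# Both pools report the same underlying ~58.8 GB of writable raw space.
# The gap up to AVAIL (~68.9 GB) is headroom Ceph keeps for the fullest
# OSD and the full-ratio safety margin; RAW USED covers the rest of 119G.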


--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
