Re: CephFS mount shows the entire cluster size as opposed to custom-cephfs-pool-size

Not sure if this is still true with Jewel CephFS, i.e.:

"cephfs does not support any type of quota, df always reports entire cluster size."

https://www.spinics.net/lists/ceph-users/msg05623.html
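
For what it's worth, by the Jewel era CephFS did gain directory quotas as virtual xattrs, but they are enforced client-side and at that time only ceph-fuse honored them (not the kernel client), so a kernel mount's df would still show the whole cluster. A sketch, assuming a CephFS tree mounted via ceph-fuse at /mnt/cephfs with an existing subdirectory /mnt/cephfs/data (the path is just an example):

```shell
# Quotas are virtual xattrs on a directory, enforced by the client;
# in the Jewel era only ceph-fuse honored them, not the kernel client.

# Cap the subtree at 100 GiB and 10000 files
setfattr -n ceph.quota.max_bytes -v $((100 * 1024 * 1024 * 1024)) /mnt/cephfs/data
setfattr -n ceph.quota.max_files -v 10000 /mnt/cephfs/data

# Read the limits back
getfattr -n ceph.quota.max_bytes /mnt/cephfs/data
getfattr -n ceph.quota.max_files /mnt/cephfs/data

# Remove a quota by setting it back to 0
setfattr -n ceph.quota.max_bytes -v 0 /mnt/cephfs/data
```

These commands need a live cluster and mount, so treat them as a configuration sketch rather than something to run as-is.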

 

--

Deepak

 

From: Deepak Naidu
Sent: Thursday, March 16, 2017 6:19 PM
To: 'ceph-users'
Subject: CephFS mount shows the entire cluster size as opposed to custom-cephfs-pool-size

 

Greetings,

 

I am trying to build a CephFS system. I have created a CRUSH map that uses only certain OSDs, and I have created pools from them. But when I mount CephFS, the mount size is my entire Ceph cluster size. How is that?

Ceph cluster & pools

 

[ceph-admin@storageAdmin ~]$ ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED
    4722G     4721G         928M          0.02
POOLS:
    NAME                 ID     USED     %USED     MAX AVAIL     OBJECTS
    ecpool_disk1         22        0         0         1199G           0
    rcpool_disk2         24        0         0         1499G           0
    rcpool_cepfsMeta     25     4420         0        76682M          20

CephFS volume & pool

 

Here data0 is the volume/filesystem name,
rcpool_cepfsMeta is the metadata pool, and
rcpool_disk2 is the data pool.

 

[ceph-admin@storageAdmin ~]$ ceph fs ls

name: data0, metadata pool: rcpool_cepfsMeta, data pools: [rcpool_disk2 ]

 

 

Command to mount CephFS

sudo mount -t ceph mon1:6789:/ /mnt/cephfs/ -o name=admin,secretfile=admin.secret

 

 

Client host df -h output

192.168.1.101:6789:/     4.7T  928M  4.7T   1% /mnt/cephfs
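
That 4.7T matches the GLOBAL line of the ceph df output above, not the data pool's 1499G MAX AVAIL: the client reports the cluster-wide raw size, and df -h rounds up to one decimal in powers of 1024. A quick sanity check of the arithmetic (4722 is the GLOBAL SIZE figure from above):

```shell
# df -h style conversion: G -> T in powers of 1024, rounded up to 1 decimal
g=4722
tenths=$(( (g * 10 + 1023) / 1024 ))   # ceiling of 4722*10/1024 = 47
echo "${tenths%?}.${tenths#?}T"        # prints 4.7T, as df -h shows
```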

--

Deepak


This email message is for the sole use of the intended recipient(s) and may contain confidential information.  Any unauthorized review, use, disclosure or distribution is prohibited.  If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
