Re: CephFS mount shows the entire cluster size as opposed to custom-cephfs-pool-size


 



On Friday, March 17, 2017 at 7:44 AM, Deepak Naidu <dnaidu@xxxxxxxxxx> wrote:
> ... df always reports entire cluster
> size

... instead of the CephFS data pool's size.

This issue was recently recorded as a feature request:
http://tracker.ceph.com/issues/19109
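
In the meantime, if the goal is to cap how much data the CephFS
data pool can hold (independent of what df shows on the client),
a pool-level quota is one option. A minimal sketch, using the pool
name from your mail below (the byte value is illustrative):

  # limit rcpool_disk2 to ~1.5 TB
  ceph osd pool set-quota rcpool_disk2 max_bytes 1500000000000

  # check the quotas currently set on the pool
  ceph osd pool get-quota rcpool_disk2

Once the quota is reached, writes to the pool are blocked, but the
client's df output still shows the cluster-wide numbers.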

> Not sure if this is still true with Jewel CephFS, i.e. that
> CephFS does not support any type of quota

If you're interested in setting quotas on directories
in the FS, you can do that. See the docs:
http://docs.ceph.com/docs/master/cephfs/quota/
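
Quotas are set as extended attributes on a directory. A minimal
sketch, assuming the FS is mounted at /mnt/cephfs and using an
example directory name:

  # limit the directory to 100 GB and 100k files
  setfattr -n ceph.quota.max_bytes -v 100000000000 /mnt/cephfs/projects
  setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/projects

  # read the limits back
  getfattr -n ceph.quota.max_bytes /mnt/cephfs/projects
  getfattr -n ceph.quota.max_files /mnt/cephfs/projects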

You'd have to use the FUSE client (the kernel client
does not support quotas),
http://docs.ceph.com/docs/master/cephfs/fuse/
and set a client config option,
client_quota = true
in Jewel releases (preferably the latest, v10.2.6).
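
For example (a sketch; the monitor address and mount point are
taken from your mail below, adjust for your client):

  # on the client, in ceph.conf
  [client]
      client quota = true

  # mount with the FUSE client instead of the kernel client
  sudo ceph-fuse -m mon1:6789 /mnt/cephfs
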
A known quota issue that was discussed recently:
http://tracker.ceph.com/issues/17939

-Ramana

> 
> 
> 
> https://www.spinics.net/lists/ceph-users/msg05623.html
> 
> 
> 
> --
> 
> Deepak
> 
> 
> 
> 
> From: Deepak Naidu
> Sent: Thursday, March 16, 2017 6:19 PM
> To: 'ceph-users'
> Subject: CephFS mount shows the entire cluster size as opposed to
> custom-cephfs-pool-size
> 
> 
> 
> 
> Greetings,
> 
> 
> 
> I am trying to build a CephFS system. Currently I have created my crush map
> which uses only certain OSDs, and I have pools created from them. But when I
> mount the CephFS filesystem, the mount size is my entire Ceph cluster size.
> How is that?
> 
> 
> 
> 
> 
> Ceph cluster & pools
> 
> 
> 
> [ceph-admin@storageAdmin ~]$ ceph df
> GLOBAL:
>     SIZE      AVAIL     RAW USED     %RAW USED
>     4722G     4721G         928M          0.02
> POOLS:
>     NAME                 ID     USED     %USED     MAX AVAIL     OBJECTS
>     ecpool_disk1         22        0         0         1199G           0
>     rcpool_disk2         24        0         0         1499G           0
>     rcpool_cepfsMeta     25     4420         0        76682M          20
> 
> 
> 
> 
> 
> CephFS volume & pool
> 
> 
> 
> Here data0 is the volume/filesystem name
> rcpool_cepfsMeta is the metadata pool
> rcpool_disk2 is the data pool
> 
> 
> 
> [ceph-admin@storageAdmin ~]$ ceph fs ls
> 
> name: data0 , metadata pool: rcpool_cepfsMeta, data pools: [rcpool_disk2 ]
> 
> 
> 
> 
> 
> Command to mount CephFS
> 
> sudo mount -t ceph mon1:6789:/ /mnt/cephfs/ \
>      -o name=admin,secretfile=admin.secret
> 
> 
> 
> 
> 
> Client host df -h output
> 
> 192.168.1.101:6789:/ 4.7T 928M 4.7T 1% /mnt/cephfs
> 
> 
> 
> 
> 
> 
> 
> --
> 
> Deepak
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



