Re: Problem about capacity when mounting CephFS?

Thanks, Sage.

tuantaba

On 07/16/2013 09:24 PM, Sage Weil wrote:
On Tue, 16 Jul 2013, Ta Ba Tuan wrote:
Thanks, Sage.
My concern is about the capacity reported when mounting CephFS:
when the disks are full, will it show 50% or 100% Used?

100%.
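To make the arithmetic concrete, here is a rough sketch assuming the 166TB raw cluster described below and 3x replication (the exact figures depend on your cluster):

    Raw capacity reported by statfs/df:   166TB
    Usable with 3 replicas (roughly):     166TB / 3 ~ 55TB
    Storing ~55TB of file data consumes   ~166TB of raw space,
    so df reports Used/Size = 166TB/166TB = 100%, not 50%.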

sage


On 07/16/2013 11:01 AM, Sage Weil wrote:
On Tue, 16 Jul 2013, Ta Ba Tuan wrote:
Hi everyone.

I have 83 OSDs, each 2TB, for a total raw capacity of 166TB.
I'm using replication size 3 for the pools ('data', 'metadata').

But when I mount the Ceph filesystem from another machine (using: mount -t ceph
Monitor_IP:/ /ceph -o name=admin,secret=xxxxxxxxxx),
the capacity shown is about 160TB. Since I use replication 3, I expected
it to report roughly 160TB/3, i.e. about 53TB?

Filesystem                Size  Used Avail Use% Mounted on
192.168.32.90:/    160T  500G  156T   1%  /tmp/ceph_mount

Could you please explain this to me?

statfs/df show the raw capacity of the cluster, not the usable capacity.
How much data you can store is a (potentially) complex function of your
CRUSH rules and replication layout.  If you store 1TB, you'll notice the
available space will go down by about 2TB (if you're using the default
2x).
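On recent releases you can see both views side by side with 'ceph df': the GLOBAL section reports the raw totals (what statfs/df shows), while the POOLS section reports logical, per-pool usage. A rough, illustrative sketch only (column layout varies by release; the numbers just mirror this thread, where 500G of raw usage corresponds to roughly 166G of file data at 3x):

$ ceph df
GLOBAL:
    SIZE     AVAIL    RAW USED     %RAW USED
    166T     165T     500G         0.29
POOLS:
    NAME         ID     USED     %USED     OBJECTS
    data         0      166G     ...       ...
    metadata     1      ...      ...       ...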

sage


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



