Re: Space available reported on Ceph file system

Yes Bill,
but it would be nice to see the real space available reported at least
by the CephFS clients, retrieving the pools and their rep sizes from
the monitors and dividing the total space accordingly.
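Just to sketch what I mean (a rough client-side approximation only,
assuming a single data pool with a uniform rep size; the pool name and
mount point are the ones from my test setup quoted below):

import os
import subprocess

POOL = "data"        # the pool backing the filesystem (see the pool dump below)
MOUNT = "/mnt/ceph"  # the cephfs mount point (see the df output below)

# Ask the monitors for the pool's replication factor;
# "ceph osd pool get data size" prints e.g. "size: 2".
out = subprocess.check_output(["ceph", "osd", "pool", "get", POOL, "size"])
rep_size = int(out.decode().split(":")[1])

# statfs on the mount reports the raw cluster totals (the 80 TB here);
# dividing by the rep size gives the "real" usable figures.
st = os.statvfs(MOUNT)
total = st.f_blocks * st.f_frsize / rep_size
avail = st.f_bavail * st.f_frsize / rep_size

print("usable total: %.1f TB" % (total / 1e12))
print("usable avail: %.1f TB" % (avail / 1e12))

Of course this breaks down as soon as different pools use different
rep sizes, which is exactly Bill's point below.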

This could be a suggestion for Greg and the other guys working on the
first stable CephFS release :)

Thanks
--
Marco Aroldi

2013/3/15 Campbell, Bill <bcampbell@xxxxxxxxxxxxxxxxxxxx>:
> Yes, that is the TOTAL amount in the cluster.
>
> For example, if you have a replica size of '3', 81489 GB available, and
> you write 1 GB of data, then that data is written to the cluster 3 times,
> so your total available will be 81486 GB.  It definitely threw me off at
> first, but seeing as you can have multiple pools with different replica
> sizes it makes sense to report the TOTAL cluster availability, rather than
> trying to calculate how much is available based on replica size.
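>
> Put differently, for any single pool the usable capacity is roughly the
> raw available space divided by the pool's replica size: with size 3,
> those 81489 GB work out to 81489 / 3 = 27163 GB of actual data.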
>
> -----Original Message-----
> From: ceph-users-bounces@xxxxxxxxxxxxxx
> [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Marco Aroldi
> Sent: Friday, March 15, 2013 3:49 PM
> To: ceph-users@xxxxxxxxxxxxxx
> Subject:  Space available reported on Ceph file system
>
> Hi,
> I have a test cluster of 80 TB raw.
> My pools are using rep size = 2, so the real storage capacity is 40 TB,
> but pgmap reports a total of 80 TB available, and the cephfs mounted on
> a client reports 80 TB available too. I would expect to see a "40 TB
> available" somewhere.
>
> Is this behavior correct?
> Thanks
>
> pool 0 'data' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num
> 2880 pgp_num 2880 last_change 1 owner 0 crash_replay_interval 45
>
> pgmap v796: 8640 pgs: 8640 active+clean; 8913 bytes data, 1770 MB used,
> 81489 GB / 81491 GB avail; 229B/s wr, 0op/s
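>
> (The rep size can also be confirmed straight from the monitors, e.g.
> "ceph osd pool get data size" should print "size: 2" here.)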
>
> root@client1 ~ $ df -h
> Filesystem      Size  Used Avail Use% Mounted on
> 192.168.21.12:6789:/   80T  1,8G     80T   1% /mnt/ceph
>
> --
> Marco Aroldi
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

