Re: Space available reported on Ceph file system

Marco,

There are definitely folks who would love to see exactly what you are
asking for.  However, it's not always as simple as it might seem.
With the ability to set replication levels per pool, and given that no
space is consumed until you actually write data to a given pool, there
are often too many variables to pin down the "usable" space in the
cluster.

In the future we'll have ways to expose these variables so you can
build a tool to determine it yourself.  Beyond that, there are no
solid plans to do what you are asking in the short term.  Does that
make sense?
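
In the meantime, here's a rough sketch (Python) of the sort of tool I
mean -- it scrapes the plain-text output of 'ceph osd dump' and
'ceph -s', assuming the output formats quoted further down in this
thread (they may well change between releases):

import re
import subprocess

def pool_rep_sizes():
    """Map pool name -> rep size, parsed from 'ceph osd dump'."""
    out = subprocess.check_output(["ceph", "osd", "dump"]).decode()
    sizes = {}
    # Matches lines like: pool 0 'data' rep size 2 crush_ruleset 0 ...
    for m in re.finditer(r"pool \d+ '([^']+)' rep size (\d+)", out):
        sizes[m.group(1)] = int(m.group(2))
    return sizes

def raw_avail_gb():
    """Raw free space in GB, from the pgmap line of 'ceph -s'."""
    out = subprocess.check_output(["ceph", "-s"]).decode()
    # Matches e.g.: ... 81489 GB / 81491 GB avail ...
    m = re.search(r"(\d+) GB / \d+ GB avail", out)
    return int(m.group(1))

if __name__ == "__main__":
    avail = raw_avail_gb()
    for pool, rep in sorted(pool_rep_sizes().items()):
        # Upper bound only: every pool draws from the same raw space.
        print("%s: ~%d GB writable at rep size %d" % (pool, avail // rep, rep))

Note the numbers are only per-pool upper bounds, since all pools draw
from the same raw space -- which is exactly the ambiguity described
above.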


Best Regards,

Patrick McGarry
Director, Community || Inktank

http://ceph.com  ||  http://inktank.com
@scuttlemonkey || @ceph || @inktank


On Fri, Mar 15, 2013 at 4:10 PM, Marco Aroldi <marco.aroldi@xxxxxxxxx> wrote:
> Yes Bill,
> but it would be nice if at least the cephfs clients reported the
> real space available, retrieving each pool's rep size from the
> monitors and dividing the total space accordingly.
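> For example, with rep size = 2 and the 81491 GB total reported
> below, a client could show 81491 / 2 = ~40745 GB (about 40 TB)
> instead of the raw figure.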
>
> This could be a suggestion for Greg and the other guys working on the
> first stable CephFS release :)
>
> Thanks
> --
> Marco Aroldi
>
> 2013/3/15 Campbell, Bill <bcampbell@xxxxxxxxxxxxxxxxxxxx>:
>> Yes, that is the TOTAL amount in the cluster.
>>
>> For example, if you have a replica size of '3', 81489 GB available, and
>> you write 1 GB of data, that data is written to the cluster 3 times, so
>> your total available drops to 81486 GB.  It definitely threw me off at
>> first, but seeing as you can have multiple pools with different replica
>> sizes, it makes sense to report the TOTAL cluster availability rather
>> than trying to calculate how much is available based on replica size.
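>>
>> As a quick sanity check in Python (just the arithmetic from above,
>> nothing Ceph-specific):
>>
>>     replica_size = 3
>>     avail_before_gb = 81489   # from the pgmap line below
>>     data_written_gb = 1
>>     print(avail_before_gb - data_written_gb * replica_size)  # 81486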
>>
>> -----Original Message-----
>> From: ceph-users-bounces@xxxxxxxxxxxxxx
>> [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Marco Aroldi
>> Sent: Friday, March 15, 2013 3:49 PM
>> To: ceph-users@xxxxxxxxxxxxxx
>> Subject:  Space available reported on Ceph file system
>>
>> Hi,
>> I have a test cluster of 80 TB raw.
>> My pools use rep size = 2, so the real storage capacity is 40 TB, but
>> pgmap shows a total of 80 TB available, and the cephfs mount on a
>> client reports 80 TB available as well.  I would expect to see a
>> "40 TB available" somewhere.
>>
>> Is this behavior correct?
>> Thanks
>>
>> pool 0 'data' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num
>> 2880 pgp_num 2880 last_change 1 owner 0 crash_replay_interval 45
>>
>> pgmap v796: 8640 pgs: 8640 active+clean; 8913 bytes data, 1770 MB used,
>> 81489 GB / 81491 GB avail; 229B/s wr, 0op/s
>>
>> root@client1 ~ $ df -h
>> Filesystem      Size  Used Avail Use% Mounted on
>> 192.168.21.12:6789:/   80T  1,8G     80T   1% /mnt/ceph
>>
>> --
>> Marco Aroldi
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

