Re: Problem with capacity when mounting CephFS?

Watching.  Thanks, Neil.
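
A side note on Greg's point further down that there is no built-in command
for this (now tracked in the ticket Neil filed): a rough figure can be
scripted by hand. The sketch below is only an estimate under stated
assumptions: it uses the raw capacity from this thread and assumes a
replicated pool named 'data', and the real usable number still depends on
CRUSH rules, as Sage explains below.

#!/usr/bin/env python
# Rough estimate only, not a built-in Ceph command: usable space is
# approximately raw capacity divided by the pool's replica count.
import subprocess

def pool_replica_count(pool="data"):
    # "ceph osd pool get <pool> size" prints e.g. "size: 3"
    out = subprocess.check_output(["ceph", "osd", "pool", "get", pool, "size"])
    return int(out.decode().split(":")[1])

def usable_estimate_tb(raw_total_tb, replicas):
    # Ignores CRUSH placement constraints and per-pool differences.
    return raw_total_tb / float(replicas)

if __name__ == "__main__":
    raw_tb = 166.0   # 83 OSDs x 2TB, as in this thread
    replicas = 3     # or pool_replica_count("data") against a live cluster
    print("approx usable: %.1f TB" % usable_estimate_tb(raw_tb, replicas))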

On Tue, Jul 16, 2013 at 12:43 PM, Neil Levine <neil.levine@xxxxxxxxxxx> wrote:
> This seems like a good feature to have. I've created
> http://tracker.ceph.com/issues/5642
>
> N
>
>
> On Tue, Jul 16, 2013 at 8:05 AM, Greg Chavez <greg.chavez@xxxxxxxxx> wrote:
>>
>> This is interesting.  So there are no built-in ceph commands that can
>> calculate your usable space?  It just so happened that I was going to
>> try to figure that out today (new OpenStack block cluster, 20TB total
>> capacity) by skimming through the documentation.  I figured that there
>> had to be a command that would do this.  Blast and gadzooks.
>>
>> On Tue, Jul 16, 2013 at 10:37 AM, Ta Ba Tuan <tuantb@xxxxxxxxxx> wrote:
>> >
>> > Thanks, Sage.
>> >
>> > tuantaba
>> >
>> >
>> > On 07/16/2013 09:24 PM, Sage Weil wrote:
>> >>
>> >> On Tue, 16 Jul 2013, Ta Ba Tuan wrote:
>> >>>
>> >>> Thanks, Sage.
>> >>> I was worried about the capacity reported when mounting CephFS.
>> >>> But when the disks are full, will the usage show 50% or 100%?
>> >>
>> >> 100%.
>> >>
>> >> sage
>> >>
>> >>>
>> >>> On 07/16/2013 11:01 AM, Sage Weil wrote:
>> >>>>
>> >>>> On Tue, 16 Jul 2013, Ta Ba Tuan wrote:
>> >>>>>
>> >>>>> Hi everyone.
>> >>>>>
>> >>>>> I have 83 OSDs, each with 2TB, so the total raw capacity is 166TB.
>> >>>>> I'm using replica 3 for the pools ('data', 'metadata').
>> >>>>>
>> >>>>> But when I mount the Ceph filesystem from another host (using:
>> >>>>> mount -t ceph Monitor_IP:/ /ceph -o name=admin,secret=xxxxxxxxxx),
>> >>>>> the reported capacity is 160TB. Since I use replica 3, I expected
>> >>>>> it to show roughly 160TB/3 ≈ 53TB?
>> >>>>>
>> >>>>> Filesystem                Size  Used Avail Use% Mounted on
>> >>>>> 192.168.32.90:/    160T  500G  156T   1%  /tmp/ceph_mount
>> >>>>>
>> >>>>> Could you please explain this to me?
>> >>>>
>> >>>> statfs/df show the raw capacity of the cluster, not the usable
>> >>>> capacity.
>> >>>> How much data you can store is a (potentially) complex function of
>> >>>> your
>> >>>> CRUSH rules and replication layout.  If you store 1TB, you'll notice
>> >>>> the
>> >>>> available space will go down by about 2TB (if you're using the
>> >>>> default
>> >>>> 2x).
>> >>>>
>> >>>> sage
>> >>>
>> >>>
>> >
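
To put numbers on Sage's explanation above: 83 OSDs x 2TB gives roughly 166TB
of raw capacity, and with 3x replication only about a third of that, around
55TB, is usable for data. Likewise, with the 2x default Sage mentions, every
1TB written consumes about 2TB of raw space, which is why the available
figure reported by df drops faster than the amount of data stored.
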
>>
>>
>>
>> --
>> \*..+.-
>> --Greg Chavez
>> +//..;};
>
>



-- 
\*..+.-
--Greg Chavez
+//..;};
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



