Re: Is this situation about data loss?

> On 10/30/2014 11:40 AM, Cheng Wei-Chung wrote:
>> Dear all:
>>
>> I've run into a strange situation. First, here is my ceph status output:
>>
>> cluster fb155b6a-5470-4796-97a4-185859ca6953
>>   ......
>>      osdmap e25234: 20 osds: 20 up, 20 in
>>       pgmap v2186527: 1056 pgs, 4 pools, 5193 GB data, 1316 kobjects
>>             8202 GB used, 66170 GB / 74373 GB avail
>>                 1056 active+clean
>>
>> The replica size of my pool configuration is 2; 8202 GB is the total
>> raw usage, and 5193 GB is roughly my actual data size.
>> Is that right?
>>
>> I thought that with 5193 GB of data I should be using at least
>> 5193 * 2 (replica size) = 10386 GB?
>> Has anyone run into the same situation,
>> or am I misunderstanding how data usage is reported in ceph status?
>>
>
> Do all the OSDs have a dedicated filesystem or is there something else
> on those filesystems?
>

Yes, each OSD has its own dedicated XFS filesystem, and nothing else is stored on those filesystems.
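
For what it's worth, this is roughly how I checked that, assuming the default mount points under /var/lib/ceph/osd:

    # each OSD should show up as its own dedicated XFS mount
    mount -t xfs | grep /var/lib/ceph/osd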

> The OSDs report back to the monitors how much space they have used,
> based on what "df" tells them.
>

Does that mean I can manually compare the df output on the OSDs with the
data usage reported by ceph status?
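
For example, would something like the following be a sensible cross-check? This is only a sketch; it assumes the default OSD data path /var/lib/ceph/osd/ceph-* and would need to be run on every OSD host, with the per-host totals summed by hand:

    # sum the "Used" column of all OSD mounts on this host, in GB
    df -P -B 1G /var/lib/ceph/osd/ceph-* | awk 'NR > 1 { sum += $3 } END { print sum " GB used on this host" }'

    # compare the summed total across hosts with the RAW USED value the monitors report
    ceph df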

> The data used, however, comes from the Placement Group statistics.
>

So is there a way to check the PGs to confirm the integrity of my data?
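
For example, would triggering a deep scrub on every PG be the right way to verify this? Just a sketch; pool id 5 is my volumes pool (see the ceph df output below), and I realise deep scrubs add extra I/O load:

    # deep-scrub every PG in pool 5; a deep scrub reads and checksums the object data
    for pg in $(ceph pg dump 2>/dev/null | awk '$1 ~ /^5\./ { print $1 }'); do
        ceph pg deep-scrub "$pg"
    done

    # any PG that fails verification should then be reported as inconsistent
    ceph health detail | grep -i inconsistent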

> Can you check with "ceph df" to see in which pools the data is?
>

Here is my ceph df output:

GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    74373G     66170G        8202G         11.03
POOLS:
    NAME         ID     USED      %USED     MAX AVAIL     OBJECTS
    volumes      5      4734G      6.37        31926G     1289307
    images       6       458G      0.62        31926G       58897
    data         7          0         0        31926G           0
    metadata     8          0         0        31926G           0
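
For reference, this is the arithmetic behind my question, treating the pool USED values above as the logical data size:

    # 4734 GB (volumes) + 458 GB (images) = 5192 GB of logical data
    echo $(( 4734 + 458 ))
    # with replica size 2 I would expect about 2 * 5192 = 10384 GB of raw usage,
    # yet GLOBAL RAW USED is only 8202 GB
    echo $(( (4734 + 458) * 2 ))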

If you need more information, please let me know.

Thanks!!

>> many thanks!
>
>
> --
> Wido den Hollander
> 42on B.V.
> Ceph trainer and consultant
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



