Re: Is this situation about data loss?

On 10/30/2014 11:40 AM, Cheng Wei-Chung wrote:
> Dear all:
> 
> I have run into a strange situation. First, here is my ceph status:
> 
> cluster fb155b6a-5470-4796-97a4-185859ca6953
>   ......
>      osdmap e25234: 20 osds: 20 up, 20 in
>       pgmap v2186527: 1056 pgs, 4 pools, 5193 GB data, 1316 kobjects
>             8202 GB used, 66170 GB / 74373 GB avail
>                 1056 active+clean
> 
> The replica size of my pool configuration is 2, 8202 GB is the total
> space used, and 5193 GB is roughly my actual data size.
> Is that right?
> 
> I thought that with 5193 GB of data I should be using at least
> 5193 * 2 (replica size) = 10386 GB?
> Has anyone seen the same situation,
> or am I misunderstanding the data usage shown in ceph status?
> 
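As a quick sanity check of the arithmetic above, the replica count can be
confirmed per pool before multiplying. A minimal sketch, assuming "rbd" is
just a placeholder for one of your pool names:

# list the pools, then read the replica count ("size") of one of them
ceph osd lspools
ceph osd pool get rbd size

# with size = 2, 5193 GB of data would be expected to occupy
# roughly 2 * 5193 GB = 10386 GB of raw space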

Do all the OSDs have a dedicated filesystem, or is there something else
stored on those filesystems?

The OSDs report their usage back to the monitors based on what "df"
tells them.

The data size, however, comes from the Placement Group statistics.
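
Those statistics can be inspected directly if you want to compare, for
example:

# summarised placement group statistics, plus a per-pool breakdown
ceph pg dump summary
ceph pg dump pools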

Can you check with "ceph df" to see which pools the data is in?
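
Something along these lines; the GLOBAL section reflects the raw usage the
OSDs report via "df", while the POOLS section shows the logical data size
per pool from the PG statistics:

# compare raw usage (GLOBAL) against per-pool data size (POOLS)
ceph df
# "ceph df detail" gives a more verbose per-pool breakdown
ceph df detail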

> many thanks!


-- 
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



