Re: ceph -w: Understanding "MB data" versus "MB used"

On Wed, Mar 25, 2015 at 1:24 AM, Saverio Proto <zioproto@xxxxxxxxx> wrote:
> Hello there,
>
> I started to push data into my ceph cluster. There is something I
> cannot understand in the output of ceph -w.
>
> When I run ceph -w I get this kind of output:
>
> 2015-03-25 09:11:36.785909 mon.0 [INF] pgmap v278788: 26056 pgs: 26056
> active+clean; 2379 MB data, 19788 MB used, 33497 GB / 33516 GB avail
>
>
> 2379MB is actually the data I pushed into the cluster, I can see it
> also in the "ceph df" output, and the numbers are consistent.
>
> What I don't understand is 19788 MB used. All my pools have size 3, so I
> expected something like 2379 * 3. Instead this number is very big.
>
> I really need to understand how "MB used" grows because I need to know
> how many disks to buy.

"MB used" is the summation of (the programmatic equivalent to) "df"
across all your nodes, whereas "MB data" is calculated by the OSDs
based on data they've written down. Depending on your configuration
"MB used" can include thing like the OSD journals, or even totally
unrelated data if the disks are shared with other applications.

"MB used" including the space used by the OSD journals is my first
guess about what you're seeing here, in which case you'll notice that
it won't grow any faster than "MB data" does once the journal is fully
allocated.
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com