Re: another Problem / question

On Wed, Jan 18, 2012 at 05:13, Jens Rehpöhler <jens.rehpoehler@xxxxxxxx> wrote:
> Hi all,
>
> today i've done a few performance tests with my two clusters.
>
> I've used : rados -p data load-gen --run-length 10 --target-throughput 50
>
> to generate some load. Now I have a question regarding the "ceph -w" output.
>
> The output before the test was:
>
> 2012-01-18 11:55:28.621107    pg v113011: 594 pgs: 594 peering; 61022 MB
> data, 120 GB used, 244 GB / 372 GB avail
>
> here is the output after the test:
>
> 2012-01-18 13:53:30.190602    pg v113119: 594 pgs: 594 active+clean;
> 1058 GB data, 120 GB used, 244 GB / 372 GB avail
>
> How can I have more data than the available disk space? Is there any
> possibility to identify the rados objects of this test and delete them
> manually?

A quick read of the source suggests that "rados load-gen" writes at
random offsets:

  size_t off = get_random(0, info.len);

which means you end up with a lot of sparse objects. Ceph's "data"
figure is the sum of the logical object sizes, while "used" is the
space actually allocated on disk, so "1058 GB data, 120 GB used"
makes sense.
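
For anyone curious, here is a minimal sketch (plain POSIX, not Ceph
code; the path and offset are just illustrative) of why a sparse write
inflates the logical size without consuming disk space:

  #include <cstdio>
  #include <fcntl.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main() {
      const char *path = "/tmp/sparse-demo";  // hypothetical demo file
      int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
      if (fd < 0) { perror("open"); return 1; }

      // Write one byte at a 1 GB offset, leaving a hole before it.
      if (pwrite(fd, "x", 1, 1024L * 1024 * 1024) != 1) {
          perror("pwrite"); return 1;
      }

      struct stat st;
      if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

      // st_size is the logical size (what "data" counts);
      // st_blocks * 512 is what is actually allocated (what "used" counts).
      printf("logical size: %lld bytes\n", (long long)st.st_size);
      printf("allocated:    %lld bytes\n", (long long)st.st_blocks * 512);

      close(fd);
      unlink(path);
      return 0;
  }

As for cleaning up: you should be able to list the pool's objects with
"rados -p data ls" and remove them one by one with "rados -p data rm
<name>", though double-check which objects actually belong to the test
before deleting anything.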