Re: speedup ceph / scaling / find the bottleneck

On Sun, Jul 1, 2012 at 11:12 PM, Stefan Priebe - Profihost AG
<s.priebe@xxxxxxxxxxxx> wrote:
> Am 02.07.2012 07:02, schrieb Alexandre DERUMIER:
>
>> Hi,
>> my 2 cents:
>> maybe with a smaller range (like 100MB) of random I/O,
>> you have a better chance of aggregating requests into the same 4MB block?
>
>
> Yes, maybe. If you only have a range of 100MB, the chance that you'll hit
> the same 4MB block again is very high.
>
> @sage / mark
> How does the aggregation work? Does it work 4MB blockwise or target node
> based?
Aggregation is based on the 4MB blocks, and if you've got caching
enabled then it's also not going to flush them out to disk very often
if you're continuously updating the block — I don't remember all the
conditions, but essentially, you'll run into dirty limits and it will
asynchronously flush out the data based on a combination of how old it
is, and how long it's been since some version of it was stable on
disk.
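The mechanism Sage describes can be illustrated with a small sketch. This is not the actual librbd cache code; the class, the constants (BLOCK_SIZE, DIRTY_LIMIT, MAX_DIRTY_AGE), and the flush policy are all simplified assumptions. It shows the two effects discussed above: repeated writes to the same 4MB block coalesce into one cached dirty copy (with a 100MB test range there are only 25 distinct 4MB blocks, so random writes re-hit cached blocks often), and dirty blocks are written back only once they age out or a dirty limit is exceeded.

```python
# Illustrative writeback-cache sketch (assumed names and policy, not
# the real Ceph implementation).

import time

BLOCK_SIZE = 4 * 1024 * 1024   # 4MB block/object granularity
DIRTY_LIMIT = 8                # assumed: max dirty blocks before forced flush
MAX_DIRTY_AGE = 5.0            # assumed: seconds a block may stay dirty

class WritebackCache:
    def __init__(self, backend_write):
        self.backend_write = backend_write  # callable(block_no, data)
        self.dirty = {}  # block_no -> (first_dirty_time, data)

    def write(self, offset, data):
        block_no = offset // BLOCK_SIZE
        # Repeated writes to the same 4MB block just replace the cached
        # copy -- this is the aggregation effect in the thread above.
        first = self.dirty.get(block_no, (time.monotonic(), None))[0]
        self.dirty[block_no] = (first, data)
        self._maybe_flush()

    def _maybe_flush(self):
        # Flush a dirty block when it is too old, or when the number of
        # dirty blocks exceeds the limit.
        now = time.monotonic()
        for block_no in sorted(self.dirty):
            first, data = self.dirty[block_no]
            too_old = now - first > MAX_DIRTY_AGE
            over_limit = len(self.dirty) > DIRTY_LIMIT
            if too_old or over_limit:
                self.backend_write(block_no, data)
                del self.dirty[block_no]
```

Under this model, a small random-I/O range keeps rewriting already-dirty blocks, so very little data ever has to reach disk before the benchmark ends.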


On Mon, Jul 2, 2012 at 6:19 AM, Stefan Priebe - Profihost AG
<s.priebe@xxxxxxxxxxxx> wrote:
> Hello,
>
> I just want to report back some test results.
>
> Just some results from a sheepdog test using the same hardware.
>
> Sheepdog:
>
> 1 VM:
>   write: io=12544MB, bw=142678KB/s, iops=35669, runt= 90025msec
>   read : io=14519MB, bw=165186KB/s, iops=41296, runt= 90003msec
>   write: io=16520MB, bw=185842KB/s, iops=45, runt= 91026msec
>   read : io=102936MB, bw=1135MB/s, iops=283, runt= 90684msec
>
> 2 VMs:
>   write: io=7042MB, bw=80062KB/s, iops=20015, runt= 90062msec
>   read : io=8672MB, bw=98661KB/s, iops=24665, runt= 90004msec
>   write: io=14008MB, bw=157443KB/s, iops=38, runt= 91107msec
>   read : io=43924MB, bw=498462KB/s, iops=121, runt= 90234msec
>
>   write: io=6048MB, bw=68772KB/s, iops=17192, runt= 90055msec
>   read : io=9151MB, bw=104107KB/s, iops=26026, runt= 90006msec
>   write: io=12716MB, bw=142693KB/s, iops=34, runt= 91253msec
>   read : io=59616MB, bw=675648KB/s, iops=164, runt= 90353msec
>
>
> Ceph:
> 2 VMs:
>   write: io=2234MB, bw=25405KB/s, iops=6351, runt= 90041msec
>   read : io=4760MB, bw=54156KB/s, iops=13538, runt= 90007msec
>   write: io=56372MB, bw=638402KB/s, iops=155, runt= 90421msec
>   read : io=86572MB, bw=981225KB/s, iops=239, runt= 90346msec
>
>   write: io=2222MB, bw=25275KB/s, iops=6318, runt= 90011msec
>   read : io=4747MB, bw=54000KB/s, iops=13500, runt= 90008msec
>   write: io=55300MB, bw=626733KB/s, iops=153, runt= 90353msec
>   read : io=84992MB, bw=965283KB/s, iops=235, runt= 90162msec
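As a reading aid for the fio lines above: iops is simply bw divided by the I/O block size, which is why the large-block runs report low iops at high bandwidth. The block sizes themselves (4KB and 4MB) are not stated in the post; they are assumptions inferred from the bw/iops ratios. A quick sanity check:

```python
# Verify that fio's reported iops matches bw / block size for two of
# the lines above. The 4KB and 4MB block sizes are inferred, not stated.

def iops_from_bw(bw_kb_s, block_kb):
    # fio reports iops as a truncated integer
    return bw_kb_s // block_kb

# Sheepdog, 1 VM, assumed 4KB random write: bw=142678KB/s, iops=35669
assert iops_from_bw(142678, 4) == 35669

# Sheepdog, 1 VM, assumed 4MB write: bw=185842KB/s, iops=45
assert iops_from_bw(185842, 4096) == 45
```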

I can't quite tell what's going on here; can you describe the test in
more detail?
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

