Re: speedup ceph / scaling / find the bottleneck

On Thu, Jul 5, 2012 at 8:50 PM, Alexandre DERUMIER <aderumier@xxxxxxxxx> wrote:
> Hi,
> Stefan is on vacation at the moment; I don't know if he can reply to you.
>
> But I can reply for him on the KVM part (we ran the same tests together in parallel).
>
> - kvm is 1.1
> - rbd 0.48
> - drive option rbd:pool/volume:auth_supported=cephx;none;keyring=/etc/pve/priv/ceph/ceph.keyring:mon_host=X.X.X.X";
> - using writeback
>
> writeback tuning in ceph.conf on the kvm host
>
> rbd_cache_size = 33554432
> rbd_cache_max_age = 2.0
>
> benchmark use in kvm guest:
> fio --filename=$DISK --direct=1 --rw=randwrite --bs=4k --size=200G --numjobs=50 --runtime=90 --group_reporting --name=file1
>
> results show a max of 14000 IO/s with 1 VM, and 7000 IO/s per VM with 2 VMs, ...
> so it doesn't scale
>
> (the bench uses direct I/O, so maybe the writeback cache doesn't help)
>
> hardware for Ceph is 3 nodes with 4 Intel SSDs each (1 drive can handle 40000 IO/s of random writes locally)

I'm interested in figuring out why we aren't getting useful data out
of the admin socket, and for that I need the actual configuration
files. It wouldn't surprise me if there are several layers to this
issue but I'd like to start at the client's endpoint. :)
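
(As a rough sketch of how one might expose and query the librbd
client's admin socket on the KVM host; the [client] section layout,
socket path, and command names here are assumptions, so adjust them
to your version and paths:)

    # ceph.conf on the KVM host: put the client-side options, including
    # the rbd cache settings quoted above, under [client], and give each
    # librbd client its own admin socket
    [client]
        rbd_cache_size = 33554432
        rbd_cache_max_age = 2.0
        admin socket = /var/run/ceph/$name.$pid.asok

    # with the VM running, dump the client's live perf counters
    # (older builds name this command perfcounters_dump)
    ceph --admin-daemon /var/run/ceph/client.admin.<pid>.asok perf dump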

Regarding the random IO, you shouldn't overestimate your storage. Under
plenty of workloads your drives will be lucky to sustain more than 2k
IO/s, which is about what you're seeing:
http://techreport.com/articles.x/22415/9
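
(If you want to sanity-check the drives themselves, a sustained 4k
random-write run straight against one SSD, outside the VM, is a simple
baseline; this is just a sketch, the device name and libaio/iodepth
choices are assumptions, and note it overwrites the device:)

    fio --name=ssd-baseline --filename=/dev/sdX --direct=1 \
        --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 \
        --runtime=300 --time_based --group_reporting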
-Greg
--

