Re: KVM / Ceph performance problems

After changing the settings below, the Linux guest has a good write speed.

But the FreeNAS guest still stays at 10 MB/s.

After running some tests on FreeBSD with a bigger block size,
"dd if=/dev/zero of=testfile bs=9000", I get about 80 MB/s.

With the default block size, "dd if=/dev/zero of=testfile", the speed is 10 MB/s.

What can I do?

Kind regards,

Michiel Piscaer




On 23-11-16 10:02, M. Piscaer wrote:
> Hi,
> 
> Thank you for your help.
> 
> After changing these settings, the Linux guest got an increase in speed.
> The FreeNAS guest still has a write speed of 10 MB/s.
> 
> The disk driver is virtio and the cache is set to Write back.
> 
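A quick way to double-check that on the Proxmox side; a sketch with an
illustrative VM ID, storage name, and disk name:

  qm set 100 -virtio0 ceph-rbd:vm-100-disk-1,cache=writeback
  qm config 100 | grep virtio0
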
> What am I missing?
> 
> Kind regards,
> 
> Michiel Piscaer
> 
> On 23-11-16 08:05, Оралов, Алексей С. wrote:
>> Hello Michiel,
>>
>> Use the "virtio" disk driver and the "Write back" cache mode.
>>
>> Also, on the Proxmox node, add this Ceph client configuration:
>>
>> /etc/ceph/ceph.conf
>>
>> [client]
>> #admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
>> rbd cache = true
>> rbd_cache_writethrough_until_flush = true
>> rbd_readahead_disable_after_bytes=0
>> rbd_default_format = 2
>>
>> #Tuning options
>> #rbd_cache_size = 67108864  #64M
>> #rbd_cache_max_dirty = 50331648  #48M
>> #rbd_cache_target_dirty = 33554432  #32M
>> #rbd_cache_max_dirty_age = 2
>> #rbd_op_threads = 10
>> #rbd_readahead_trigger_requests = 10
>>
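For what it's worth, a sketch for verifying that a running client actually
picked these settings up: uncomment the admin socket line above, then query
the socket (the pid/cctid part of the file name below is illustrative; use
whatever appears under /var/run/ceph/ while the guest is running):

  ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.94027650.asok \
      config show | grep rbd_cache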
>>
>> On 23.11.2016 9:53, M. Piscaer wrote:
>>> Hi,
>>>
>>> I have a little performance problem with KVM and Ceph.
>>>
>>> I'm using Proxmox 4.3-10/7230e60f, with KVM version
>>> pve-qemu-kvm_2.7.0-8. Ceph is on version jewel 10.2.3 on both the
>>> cluster and the client (ceph-common).
>>>
>>> The systems are connected to the network via a 4x bond with a total
>>> of 4 Gb/s.
>>>
>>> Within a guest:
>>> - when I do a write, I get about 10 MB/s;
>>> - when I write from within the guest directly to Ceph, I get the
>>> same speed;
>>> - but when I mount a Ceph object on the Proxmox host, I get about 110 MB/s.
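
To separate the KVM/guest path from the raw cluster throughput, the image
can also be benchmarked directly on the Proxmox node; a sketch with
illustrative pool/image names (use a scratch image, bench-write really
writes to it):

  rbd bench-write rbd/scratch-image --io-size 4194304 --io-threads 16 --io-total 1073741824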
>>>
>>> The guest is connected to interface vmbr160 → bond0.160 → bond0.
>>>
>>> This bridge vmbr160 has an IP address in the same subnet as the Ceph
>>> cluster, with an MTU of 9000.
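
Since that setup only helps if the MTU is really 9000 end to end, a quick
check with Linux ping from the Proxmox node (the target is an illustrative
Ceph monitor IP; 8972 = 9000 minus 28 bytes of IP and ICMP headers):

  ping -c 3 -M do -s 8972 192.168.160.10

If this fails while a normal ping works, an MTU mismatch somewhere on the
path (bond, VLAN, switch) would explain the low throughput.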
>>>
>>> The KVM block device is a virtio device.
>>>
>>> What can I do to solve this problem?
>>>
>>> Kind regards,
>>>
>>> Michiel Piscaer
>>

-- 

E-mail:   michiel@xxxxxxxxxxxxxxx
Telefoon: +31 77 7501700
Fax:      +31 77 7501701
Mobiel:   +31 6 16048782
Threema:  PBPCM9X3
PGP:      0x09F8706A
W3:       www.digidiensten.nl
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



