Hi,

Thank you for your help. After changing these settings the Linux guest got an increase in speed. The FreeNAS guest still has a write speed of 10 MB/s. The disk driver is virtio and it has a Write back cache. What am I missing?

Kind regards,

Michiel Piscaer

On 23-11-16 08:05, Оралов, Алексей С. wrote:
> Hello, Michiel
>
> Use the hdd driver "virtio" and the cache mode "Write back".
>
> Also add the Ceph client configuration on the Proxmox node:
>
> /etc/ceph/ceph.conf
>
> [client]
> #admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
> rbd cache = true
> rbd_cache_writethrough_until_flush = true
> rbd_readahead_disable_after_bytes = 0
> rbd_default_format = 2
>
> #Tuning options
> #rbd_cache_size = 67108864 #64M
> #rbd_cache_max_dirty = 50331648 #48M
> #rbd_cache_target_dirty = 33554432 #32M
> #rbd_cache_max_dirty_age = 2
> #rbd_op_threads = 10
> #rbd_readahead_trigger_requests = 10
>
>
> 23.11.2016 9:53, M. Piscaer wrote:
>> Hi,
>>
>> I have a little performance problem with KVM and Ceph.
>>
>> I'm using Proxmox 4.3-10/7230e60f, with KVM version
>> pve-qemu-kvm_2.7.0-8. Ceph is on version jewel 10.2.3 on both the
>> cluster and the client (ceph-common).
>>
>> The systems are connected to the network via a 4x bond with a total
>> of 4 Gb/s.
>>
>> Within a guest:
>> - When I do a write, I get about 10 MB/s.
>> - When I write from within the guest directly to Ceph, I get the
>> same speed.
>> - But when I mount a Ceph object on the Proxmox host, I get about
>> 110 MB/s.
>>
>> The guest is connected to interface vmbr160 → bond0.160 → bond0.
>>
>> The bridge vmbr160 has an IP address in the same subnet as the Ceph
>> cluster, with an MTU of 9000.
>>
>> The KVM block device is a virtio device.
>>
>> What can I do to solve this problem?
>>
>> Kind regards,
>>
>> Michiel Piscaer
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
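
A minimal way to check whether the [client] settings quoted above are actually in effect is to query librbd through its admin socket. This assumes the "admin socket" line in ceph.conf has been uncommented and the guest restarted; the socket filename below is only an example, since the real name embeds the client's pid and cctid:

  # On the Proxmox node; the actual .asok name under /var/run/ceph will differ:
  ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.94021490521344.asok config show | grep rbd_cache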
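
To measure the guest's write path without its page cache skewing the numbers, a direct-I/O test inside the guest can be compared against a raw cluster benchmark from the Proxmox node; the test file path and the pool name "rbd" are assumptions. If the node-side numbers are good and the in-guest numbers are not, the problem is in the guest/driver layer rather than the cluster or network:

  # Inside the guest: 1 GiB sequential write with O_DIRECT
  dd if=/dev/zero of=/root/ddtest bs=1M count=1024 oflag=direct

  # On the Proxmox node: 60-second write benchmark straight to the pool
  rados bench -p rbd 60 write --no-cleanup
  rados -p rbd cleanup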