Hi, Виталий.

Is the number of PGs sufficient? Too few placement groups in the pool can by itself cap RBD throughput.
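One quick way to check, assuming the image lives in the default 'rbd' pool:
# ceph osd pool get rbd pg_num
A common rule of thumb is total PGs ≈ (number of OSDs × 100) / replica count, rounded up to the next power of two.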
2014/1/17 Никитенко Виталий <v1t83@xxxxxxxxx>:
Good day! Please help me solve a problem. The setup is as follows:
An ESXi server with 1Gb NICs. It has a local datastore (store2Tb) and two iSCSI datastores connected to the second server.
The second server is a Supermicro box: two 1TB HDDs on an LSI 9261-8i with battery, 8 CPU cores, 32 GB RAM, and two 1Gb NICs. Ubuntu 12 and Ceph Emperor are installed on /dev/sda; /dev/sdb holds osd.0.
What I do next:
# rbd create esxi
# rbd map esxi
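(rbd create needs an explicit --size, given in MB; in full, with 100 GB as an assumed example size, the sequence would be something like:)
# rbd create esxi --size 102400
# rbd map esxi
# rbd showmapped
rbd showmapped confirms which /dev/rbdN the image was mapped to.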
I get /dev/rbd1, which I share using iscsitarget.
# cat ietd.conf
Target iqn.2014-01.ru.ceph:rados.iscsi.001
    Lun 0 Path=/dev/rbd1,Type=blockio,ScsiId=f817ab
Target iqn.2014-01.ru.ceph:rados.iscsi.002
    Lun 1 Path=/opt/storlun0.bin,Type=fileio,ScsiId=lun1,ScsiSN=lun1
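After editing ietd.conf the target service needs a restart; what IET actually exports can then be checked via its proc interface (the service name may vary by distro):
# service iscsitarget restart
# cat /proc/net/iet/volume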
For a test I also created an iSCSI LUN backed by a file on /dev/sda (Lun 1).
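A backing file like /opt/storlun0.bin can be created sparsely, e.g. (100 GB here is only an example size):
# dd if=/dev/zero of=/opt/storlun0.bin bs=1M seek=102400 count=0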
When migrating a virtual machine from store2Tb to Lun 0 (Ceph), the migration rate is 400-450 Mbit/s.
When migrating a VM from store2Tb to Lun 1 (Ubuntu file), the rate is 800-900 Mbit/s.
From this I conclude that the rate is limited neither by the disk (controller) nor by the network.
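The raw network leg can also be measured directly, for example with iperf between the two servers (<storage-server-ip> is a placeholder):
# iperf -s                       (on the storage server)
# iperf -c <storage-server-ip>   (from the other host)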
I tried formatting the OSD as ext4, xfs, and btrfs, but the speed is the same. Speed is very important for me, especially since I plan to move to 10Gb network links.
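To see whether Ceph itself, rather than the iSCSI layer, is the ceiling, the pool can be benchmarked directly, e.g. (assuming the default 'rbd' pool):
# rados bench -p rbd 30 write
This writes 4 MB objects for 30 seconds and reports throughput in MB/s, bypassing rbd mapping and iSCSI entirely.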
Thanks.
Vitaliy
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Best regards,
Фасихов Ирек Нургаязович
Mobile: +79229045757