Thanks, Wido den Hollander! I migrated the journal to /dev/sdc1 and re-ran the benchmark:

rados bench -p my_pool 300 write

Total time run:         300.356865
Total writes made:      7902
Write size:             4194304
Bandwidth (MB/sec):     105.235

22.01.2014, 13:08, "Никитенко Виталий" <v1t83@xxxxxxxxx>:
> Hi, Wido den Hollander
>
>>> Good day! Please help me solve the problem. There is the following scheme:
>>> An ESXi server with 1Gb NICs; it has a local store (store2Tb) and two iSCSI datastores connected to the second server.
>>> The second server is a Supermicro: two 1TB HDDs (LSI 9261-8i with battery), 8 CPU cores, 32 GB RAM and two 1Gb NICs. Ubuntu 12 and Ceph Emperor are installed on /dev/sda; the /dev/sdb disk is used for osd.0.
>> How do you do journaling?
>
> When I create the OSD I see:
> INFO:ceph-disk:Will colocate journal with data on /dev/sdb
>
>> Have you tried TGT instead?
>
> I tried tgt (with --bstype rbd) and the result is the same.
>
>> Have you also tried to run a rados benchmark? (rados bench)
>
> rados bench -p my_pool 300 write
>
> Total time run:         30.821284
> Total writes made:      371
> Write size:             4194304
> Bandwidth (MB/sec):     48.149
>
> Stddev Bandwidth:       38.1729
> Max bandwidth (MB/sec): 116
> Min bandwidth (MB/sec): 0
> Average Latency:        1.31857
> Stddev Latency:         1.6014
> Max latency:            9.2685
> Min latency:            0.013897
>
>> Also, be aware that Ceph excels in its parallel performance. You
>> shouldn't look at the performance of a single "LUN" or RBD image that
>> much; it's much more interesting to see the aggregated performance of 10
>> or maybe 100 "LUNs" together.
>
> I don't understand how to do this. Must I create 10 LUNs and 10 iSCSI datastores on ESXi, and then test by migrating 10 VMs from the local store to the iSCSI storage?
>
> Thanks! Vitaliy
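
For the aggregated-performance test Wido suggests, one rough way to approximate it from the Ceph node itself is to start several rados bench clients in parallel and sum their reported bandwidth. This is only a sketch: the pool name my_pool is taken from this thread, while the client count of 10, the 60-second duration and the queue depth of 16 are arbitrary choices.

    # Start 10 write benchmarks against the same pool at once; each bench run
    # should use its own object prefix (hostname/PID), so the runs do not collide.
    for i in $(seq 1 10); do
        rados bench -p my_pool 60 write -t 16 > bench_$i.log &
    done
    wait

    # The aggregate bandwidth is roughly the sum of the per-client figures.
    grep "Bandwidth (MB/sec)" bench_*.log

On the ESXi side the equivalent would indeed be several RBD-backed LUNs/datastores with I/O running against them at the same time; a single VM migration only exercises one queue and will not show the parallelism.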