performance tests

On 09/07/14 13:14, hua peng wrote:
> What is the I/O throughput (MB/s) for the test cases?
>
> Thanks.
Hi Hua,

The throughput in each test is IOPS x the 4K block size; all tests are
random writes.
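
For example, with 4096-byte blocks that works out to about 26.8 MB/s
for test 1 (6542 x 4096 bytes/s), 24.2 MB/s for test 2, and 10.8 MB/s
for test 3.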

Xabier
>
> On 14-7-9 6:57, Xabier Elkano wrote:
>>
>>
>> Hi,
>>
>> I was doing some tests in my cluster with the fio tool: one fio
>> instance with 70 jobs, each job writing 1GB of random data with a 4K
>> block size. I ran this test with three variations:
>>
>> 1- Create 70 images, 60GB each, in the pool. Map each image with the
>> rbd kernel module, then format and mount it as ext4. Each fio job
>> writes to a separate image/directory. (ioengine=libaio, iodepth=4,
>> direct=1)
>>
>>     IOPS: 6542
>>     AVG LAT: 41ms
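>>
>> For reference, a minimal jobfile sketch for this variation (the mount
>> points /mnt/rbd0 through /mnt/rbd69 are illustrative, assuming each
>> mapped image is formatted and mounted there; iodepth is fio's option
>> name for the queue depth):
>>
>> [global]
>> ioengine=libaio
>> iodepth=4
>> direct=1
>> rw=randwrite
>> bs=4k
>> size=1g
>>
>> [img0]
>> directory=/mnt/rbd0
>>
>> [img1]
>> directory=/mnt/rbd1
>>
>> ; ...one job section per mounted image, through [img69]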
>>
>> 2- Create one large 4.2TB image in the pool. Map it with the rbd
>> kernel module, then format and mount it as ext4. Each fio job writes
>> to a separate file in the same directory. (ioengine=libaio, iodepth=4,
>> direct=1)
>>
>>    IOPS: 5899
>>    AVG LAT:  47ms
>>
>> 3- Create one large 4.2TB image in the pool. Use fio's rbd ioengine to
>> access the image through librados. (ioengine=rbd, iodepth=4, direct=1)
>>
>>    IOPS: 2638
>>    AVG LAT: 96ms
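>>
>> A sketch of a jobfile for this variation (the pool/image names "rbd"
>> and "bigimage" are placeholders; offset_increment is one way to give
>> each of the 70 jobs its own 1GB region of the large image):
>>
>> [global]
>> ioengine=rbd
>> clientname=admin
>> pool=rbd
>> rbdname=bigimage
>> invalidate=0        ; required by some fio versions for the rbd engine
>> rw=randwrite
>> bs=4k
>> iodepth=4
>> direct=1
>> size=1g
>>
>> [rbd-writers]
>> numjobs=70
>> offset_increment=1g ; each job writes within its own 1GB region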
>>
>> Do these results make sense? From a Ceph perspective, is it better to
>> have many small images than one large one? What is the best approach
>> to simulating the workload of 70 VMs?
>>
>>
>> Thanks in advance for any help,
>> Xabier


