Hi List,
I am working towards getting some performance numbers for OpenStack Cinder with a GlusterFS backend, and I am using pblio as the workload generator tool for the same.
In short, pblio is a synthetic
OLTP enterprise workload used to stress storage systems.
The benchmark stresses a storage system to determine the maximum number of IOPS it can sustain
before the mean response latency reaches 30 milliseconds or more. More details
are available @ [1]
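The idea the benchmark automates can be sketched as a load ramp: keep raising the offered load (BSUs, each worth 50 IOPS in the SPC-1 convention) until mean latency crosses the 30 ms cutoff. A minimal illustration follows; `fake_latency_ms` is a stand-in latency model of my own, not pblio's code.

```python
# Sketch of the load-ramp idea behind pblio (not pblio's actual code).
# Assumption: each BSU corresponds to 50 IOPS of SPC-1-style load, and
# we raise BSUs until mean latency crosses the 30 ms cutoff.

LATENCY_CUTOFF_MS = 30.0
IOPS_PER_BSU = 50

def find_max_bsu(measure_latency_ms, max_bsu=200):
    """Return the largest BSU count whose mean latency stays under the cutoff."""
    best = 0
    for bsu in range(1, max_bsu + 1):
        if measure_latency_ms(bsu) >= LATENCY_CUTOFF_MS:
            break
        best = bsu
    return best

# Stand-in latency model, loosely shaped like the numbers in this mail:
# latency stays low, then climbs steeply as the device saturates.
def fake_latency_ms(bsu):
    return 0.8 + (bsu / 100.0) ** 4 * 40.0

max_bsu = find_max_bsu(fake_latency_ms)
print(max_bsu, max_bsu * IOPS_PER_BSU)
```

With a real storage target, `measure_latency_ms` would be a full timed run at that BSU level, which is what pblio's `-bsu` and `-runlen` flags drive.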
Currently I have only run pblio in a mock setup; details of the setup are below.
More real-world numbers will come soon.
Setup:
I have my host laptop; devstack AIO runs in a VM, and inside that is my nova VM
(a nested VM, thus) where I am running pblio.
The nested VM runs F22. FWIW, my laptop has only SSDs, so everything is SSD backed.
Screenshots:
[stack@devstack-f21 ~]$ [admin] ssh -i ./mykey.pem fedora@10.0.0.4 "lsblk"
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 418K 0 rom
vda 252:0 0 20G 0 disk
└─vda1 252:1 0 20G 0 part /
vdb 252:16 0 3G 0 disk
vdc 252:32 0 3G 0 disk
vdd 252:48 0 1G 0 disk
* So vdb, vdc and vdd are the GlusterFS-backed block devices in my nova VM
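When scripting against such a setup, the candidate data disks can be picked out of `lsblk` output. A small sketch below; the sample text is the listing above, and the filtering rule (TYPE "disk", no mountpoint, root disk excluded) is my assumption about what counts as a data device here.

```python
# Pick out the candidate data disks from `lsblk` output.
# Assumption: data devices are TYPE "disk" with no mountpoint,
# excluding the root disk vda (which carries the / partition).

LSBLK_OUTPUT = """\
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 418K 0 rom
vda 252:0 0 20G 0 disk
vda1 252:1 0 20G 0 part /
vdb 252:16 0 3G 0 disk
vdc 252:32 0 3G 0 disk
vdd 252:48 0 1G 0 disk
"""

def data_disks(lsblk_text, root="vda"):
    disks = []
    for line in lsblk_text.splitlines()[1:]:  # skip the header row
        fields = line.split()
        name, dtype = fields[0], fields[5]
        mountpoint = fields[6] if len(fields) > 6 else None
        if dtype == "disk" and mountpoint is None and name != root:
            disks.append(name)
    return disks

print(data_disks(LSBLK_OUTPUT))  # ['vdb', 'vdc', 'vdd']
```

The three names it returns are exactly the devices handed to pblio's `-asu1`/`-asu2`/`-asu3` flags below.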
Some of the pblio runs that I did ...
[root@vm1 pblio]# ./pblio -asu1=/dev/vdb -asu2=/dev/vdc -asu3=/dev/vdd -runlen=60 -bsu=2
-----
pblio
-----
Cache : None
ASU1 : 3.00 GB
ASU2 : 3.00 GB
ASU3 : 0.67 GB
BSUs : 2
Contexts: 1
Run time: 60 s
-----
Avg IOPS:100.57 Avg Latency:0.8021 ms
[root@vm1 pblio]# ./pblio -asu1=/dev/vdb -asu2=/dev/vdc -asu3=/dev/vdd -runlen=600 -bsu=60
-----
pblio
-----
Cache : None
ASU1 : 3.00 GB
ASU2 : 3.00 GB
ASU3 : 0.67 GB
BSUs : 60
Contexts: 1
Run time: 600 s
-----
Avg IOPS:3000.48 Avg Latency:6.1536 ms
[root@vm1 pblio]# ./pblio -asu1=/dev/vdb -asu2=/dev/vdc -asu3=/dev/vdd -runlen=600 -bsu=100
-----
pblio
-----
Cache : None
ASU1 : 3.00 GB
ASU2 : 3.00 GB
ASU3 : 0.67 GB
BSUs : 100
Contexts: 1
Run time: 600 s
-----
Avg IOPS:4831.87 Avg Latency:33.9057 ms
* So, given the above data, the max IOPS for this setup is somewhere close to, on the lower side of, 100 BSUs * 50 IOPS per BSU = 5000
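That extrapolation can be roughed out with a linear interpolation between the last two runs to see where the 30 ms cutoff is crossed. This is only a ballpark (latency is not linear in load, and the assumption of 50 IOPS per BSU is the SPC-1 convention), but it lands consistent with "a bit below 5000":

```python
# Rough estimate of the IOPS level where mean latency crosses 30 ms,
# by linear interpolation between the two measured runs above.
# (Latency vs. load is not actually linear, so this is only a ballpark.)

IOPS_PER_BSU = 50  # SPC-1 convention: one BSU generates 50 IOPS of load

runs = [
    (3000.48, 6.1536),   # 60 BSUs
    (4831.87, 33.9057),  # 100 BSUs
]

(iops_lo, lat_lo), (iops_hi, lat_hi) = runs
cutoff = 30.0
frac = (cutoff - lat_lo) / (lat_hi - lat_lo)
iops_at_cutoff = iops_lo + frac * (iops_hi - iops_lo)

print(round(iops_at_cutoff))  # somewhere below 100 * IOPS_PER_BSU = 5000
```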
This is a very rough draft/mock run, nowhere close to a real-world setup.
I wanted to share this with the broader audience; comments welcome.
More updates will come once I run these tests on a larger, close-to-real-world setup.
thanx,
deepak
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel