understanding how to tune end-to-end

Greetings,

I've successfully built a Linux storage device for testing, using tgt to
serve logical volumes from 1.5TB of SSDs in a RAID0, exported over iSER
with InfiniBand (IB) as the interconnect. The device was built to run
and test KVM guests from. I'm testing with RAID0 purely for speed -
redundancy is not important yet.
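For anyone curious about the target side, the tgt configuration is
roughly shaped like this (a sketch only - the IQN and LV path below are
placeholders, not my real names, and I believe the driver directive is
how you select iser rather than plain iscsi; check your tgt version):

```
# /etc/tgt/targets.conf (sketch; IQN and LV path are placeholders)
<target iqn.2011-01.example.com:ssd-raid0>
    # one logical volume from the SSD RAID0 volume group per LUN
    backing-store /dev/vg_ssd/lv_guest1
    # serve this target over iSER instead of plain iSCSI
    driver iser
</target>
```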

The hosts attach to the storage via iSER and then expose these LUNs to
their KVM guests. The speeds are already very impressive, but I'd like
to better understand how all of these layers interact, so I can learn
to tune the system more effectively.

I wrote a script that runs a series of fio tests from a guest all the
way through to the storage device. The guest has 4 vCPUs and 12GB of
RAM and uses virtio. The tests sweep a range of block sizes and thread
counts (1-4) to see how the different settings relate to bandwidth,
IOPS, etc. An online copy of the results can be seen here:
http://snipurl.com/25a2m2n

The script I used is here: http://snipurl.com/25a2nan
in case anyone is interested.
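In case it helps anyone reading along, each cell of the sweep is
roughly this fio job (a sketch, not my exact script - /dev/vdb and the
job name are placeholders; the real script loops over bs and numjobs):

```ini
; sweep.fio - one cell of the sweep (device path is a placeholder)
[global]
ioengine=libaio
; bypass the guest page cache so we measure the storage path
direct=1
; the iSER-backed virtio disk as seen inside the guest
filename=/dev/vdb
runtime=30
time_based
group_reporting

[randread-4k-1job]
rw=randread
; the script varies bs (e.g. 4k..1m) and numjobs (1-4)
bs=4k
iodepth=16
numjobs=1
```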

I'm not really sure what to tweak at what point in the stack to maximize
guest performance. Can anyone shed some light on this?

I realize many of these layers are unrelated to tgt, but thanks for any
advice you can give.

Regards,
Christopher

--
To unsubscribe from this list: send the line "unsubscribe stgt" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

