Hello everyone, we have run into a problem and hope someone can help.
The facts are as follows: there is a RAID array of 40 x 500 GB disks,
spread across two 3ware RAID controllers and joined by LVM. Under heavy
load its network throughput reaches the maximum of the ethernet cards,
2 Gbit/s (2x1Gb ethernet cards with bonding on the Linux side and
etherchannel on the Cisco side). If we encrypt the single LVM-backed
device with loop-aes (128 bit) (/dev/vg0/lv0 -> /dev/loop0), the
server's throughput drops to 800 Mbit/s and the loop0 device pins one
CPU at 100%.
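(For clarity, the encrypted mapping above is created roughly like this;
the exact losetup flags depend on the loop-aes version, and key handling
is omitted here:)

    # one loop over the whole LVM volume -- all AES work lands on one CPU
    losetup -e AES128 /dev/loop0 /dev/vg0/lv0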
We configured the system according to the performance section of the
loop-aes README. The original layout was:

    3ware / 20x500GB disks (RAID50) = /dev/sdb
    3ware / 20x500GB disks (RAID50) = /dev/sdc
    LVM   - /dev/vg0/lv0 = /dev/sdb,/dev/sdc

If the loop devices are instead bound directly to the arrays
(sdb -> loop0, sdc -> loop1) and LVM joins /dev/loop0 and /dev/loop1,
the layout becomes:

    3ware / 20x500GB disks (RAID50) = /dev/sdb -> /dev/loop0
    3ware / 20x500GB disks (RAID50) = /dev/sdc -> /dev/loop1
    LVM   - /dev/vg0/lv0 = /dev/loop0,/dev/loop1

With two loop devices the read load splits 50%-50% between them and we
reach 1.6 Gbit/s.
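Roughly, we build the second layout like this (the command details are
a sketch and depend on the loop-aes and LVM versions in use; key
handling is again omitted, and the striping is our guess at what gives
the even read split):

    # one encrypted loop per 3ware array, so two CPUs can do AES in parallel
    losetup -e AES128 /dev/loop0 /dev/sdb
    losetup -e AES128 /dev/loop1 /dev/sdc
    # LVM on top of the loops; striping (-i 2) spreads reads 50%-50%
    pvcreate /dev/loop0 /dev/loop1
    vgcreate vg0 /dev/loop0 /dev/loop1
    lvcreate -i 2 -n lv0 -l 100%FREE vg0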
All of the above shows that, in the present configuration, a single
loop device can carry at most 800 Mbit/s. The question is: is there any
patch, trick, or configuration option that can break the 800 Mbit/s
barrier?

Also, what is the optimal setting for the lo_prealloc value when the
loop module is loaded? Should we raise it, or is there a limit?
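To be concrete, we mean the module parameter set at load time,
something like this (the 512 below is just a placeholder, not a tuned
value):

    # pre-allocated pages per loop device; is there a sensible upper bound?
    modprobe loop lo_prealloc=512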
Also, is it better to set the read-ahead value on the physical array
(/dev/sdX), on the loop device attached to it (/dev/loopX), or on both?
Should both have the same value?
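In other words, we are asking about these knobs (the numbers below are
illustrative only):

    # read-ahead is given in 512-byte sectors; query it with --getra
    blockdev --setra 8192 /dev/sdb      # physical array
    blockdev --setra 8192 /dev/loop0    # loop device on top of it
    blockdev --getra /dev/sdb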
Thank you in advance; we hope that solving this problem will help other
people in the future as well.

Viktor