Hello Robert,

I’m certainly open to a call – when is good for you? Thanks for suggesting it.

In all tests, I/O was being performed on a single node only, and
on the same machine in all cases. The cluster has 7 nodes and the GFS volumes
were mounted on all of them, but the other 6 systems were quiesced for the test
window. I was trying to ascertain what performance penalty and/or overhead
was incurred by GFS itself.

The GFS volumes are managed by LVM and are on iSCSI targets on an EqualLogic
PS50E appliance with 14 250GB drives in a single large RAID-5 array. There
are three GbE connections from the EqualLogic to the core Cisco switch, the 7
nodes are also connected directly to the same switch, and all devices in the
test environment are on the same VLAN. No HBAs were used, and each server in
the cluster (all Dell
PowerEdge 2950 servers with single Intel Xeon quad-core X5365 @ 3.00GHz
processors and 16GB of RAM) is using a single GbE port for both general
network connectivity and the iSCSI initiator.

I was concerned about the overhead of software/CPU-based iSCSI, but the
testing with iSCSI LUNs without GFS (and without LVM) showed really good
throughput, perhaps close enough that the network overhead itself was in
play.
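(Back-of-the-envelope, assuming standard 1500-byte frames: a single GbE link
is about 125 MB/s raw, and after Ethernet, TCP/IP and iSCSI header overhead
the usable payload tops out somewhere around 110-115 MB/s, so throughput in
that neighborhood points at the wire, rather than the software initiator, as
the ceiling.)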
It might be possible to gain more performance by using bonding, though we’re
nowhere close to saturating GbE. Perhaps separating general network I/O from
iSCSI would help as well. We’re not going to invest in HBAs at this time, but
in the future it might be interesting to see how much difference they make.
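
If we do try bonding at some point, I assume it would be the standard RHEL
bonding setup, roughly along these lines (the interface names, addresses and
mode below are placeholders, not something we’ve actually configured yet):

  # /etc/modprobe.conf
  alias bond0 bonding
  options bond0 mode=balance-alb miimon=100

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  IPADDR=192.168.10.11
  NETMASK=255.255.255.0
  BOOTPROTO=none
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-eth0 (and the same for eth1)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  BOOTPROTO=none
  ONBOOT=yes

Whether balance-alb or 802.3ad would be the better mode probably depends on
what the Cisco side supports, since 802.3ad needs the switch ports set up as
an LACP channel group and balance-alb does not.
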
A challenge we’re dealing with is a massive number of small files, so there
is a lot of file-level overhead, and as you saw in the charts, the random
reads and writes were not friends of GFS. Perhaps there are GFS tuning
parameters that would specifically help speed up reading and writing of many
small files in succession, rather than a small number of files which might
well be cached in RAM by Linux.
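
On the GFS side, the first knobs I would probably try are the usual
small-file/metadata ones, something like the following (the device path is a
placeholder, and I haven’t verified these tunables against our GFS version):

  # avoid an inode update on every read of every small file
  mount -t gfs -o noatime,nodiratime /dev/vg_gfs/lv_data /mnt/data

  # inspect the glock tunables, e.g. how long unused glocks are kept around
  gfs_tool gettune /mnt/data
  gfs_tool settune /mnt/data demote_secs 600

I just don’t know yet how far that gets us against millions of small files.
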
I realize that there are numerous OS, filesystem and I/O knobs which can be
tuned for our application, but how likely is it that we could overcome a 63%
performance degradation? Mind you, I’m not factoring in the “gains” realized
by having a single shared filesystem.

Also, we will be looking at ways to have multiple readers on GFS with no
locking, or “locking-lite” if there is such a thing.
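
For example, is something along these lines the right direction (I’m guessing
at the option names from the docs, and the paths are placeholders)?

  # read-only "spectator" mount on a node that only needs to read
  mount -t gfs -o spectator /dev/vg_gfs/lv_data /mnt/data

  # or, if only one node will ever touch the volume, skip cluster locking
  # entirely (my understanding is that this is only safe when no other node
  # has the filesystem mounted)
  mount -t gfs -o lockproto=lock_nolock /dev/vg_gfs/lv_data /mnt/data
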
Does anyone here have experience with configuring GFS to be read-only for
some or all nodes and not require locking?

- K

--
Kamal Jain
kjain@xxxxxxxxxxxxxxxxxxx
+1 978.893.1098 (office)
+1 978.726.7098 (mobile)

Auraria Networks, Inc.
85 Swanson Road, Suite 120
Boxborough, MA 01719 USA
www.aurarianetworks.com