I found a problem with lock_nolock. If I mount the same GFS filesystem on two nodes, each node just treats it as a local drive: whenever I make a change on one node, the change won't show up on the other node. Is this why the manual says that lock_nolock only works for a single node, or is there a workaround for this issue? Thank you all.
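For context, the lock protocol is chosen when the filesystem is created and can be overridden at mount time; with lock_nolock no inter-node locking or cache coherency is done, which is why the two mounts drift apart. A minimal sketch, assuming a hypothetical device /dev/vg01/lvol0, cluster name "alpha", and mount point /mnt/gfs:

    # Cluster-aware filesystem: lock_dlm (or lock_gulm) plus one journal per node
    gfs_mkfs -p lock_dlm -t alpha:gfs1 -j 2 /dev/vg01/lvol0
    mount -t gfs /dev/vg01/lvol0 /mnt/gfs

    # Single-node only: lock_nolock skips all cluster locking, so a second
    # mount on another node sees stale data and risks corrupting the filesystem
    mount -t gfs -o lockproto=lock_nolock /dev/vg01/lvol0 /mnt/gfs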
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Hong Zheng

Today, I tried the configuration with lock_nolock. I configured one GFS node with lock_nolock and it performs like a local drive. But here is the question: I still want to make it a cluster, at least an active-passive cluster. Since in an active-passive cluster only one node is active at a time, I assume the data will stay consistent when the backup node takes over. I'm not sure whether this is an acceptable compromise between better performance and data consistency.
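That should hold only if the filesystem is never mounted on both nodes at the same time. A rough manual-failover sketch (hypothetical device and mount point; in practice a cluster manager such as rgmanager would enforce this ordering together with fencing):

    # On the active node, or only after that node has been fenced:
    umount /mnt/gfs

    # Only then, on the passive node taking over:
    mount -t gfs -o lockproto=lock_nolock /dev/vg01/lvol0 /mnt/gfs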
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Hong Zheng

Thanks, Kevin. Actually I did try the way you recommend. I configured one GFS application node with a software iSCSI initiator and two lock_gulm servers; the data transfer speed improved a little, but for our application the performance is about the same. Do you know if there is a way to tune GFS performance?
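One place to look (a sketch only, not a recommendation of these particular values; the mount point /mnt/gfs is hypothetical) is the per-mount tunables exposed by gfs_tool:

    # List the current tunable settings for a GFS mount
    gfs_tool gettune /mnt/gfs

    # Example: keep cached glocks around longer before they are demoted
    # (demote_secs defaults to 300; the right value is workload-dependent)
    gfs_tool settune /mnt/gfs demote_secs 600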
--
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster