Hi,

The cluster consists of two nodes:

- a partition on node1 is configured as a pool device; it is meant to be used for cluster storage
- node1 exports this device via gnbd and serves as the lock_gulm master
- node2 imports the device and is logged in to node1 as a lock_gulm client
- the CCS config archives are stored locally on both machines

Everything seems fine to me until I try to mount the GFS filesystem on the first node. When I mount the pool device (/dev/pool/pool1) there, it acquires the journal locks for all existing journals on the GFS filesystem, making it impossible to mount the filesystem from another node.

Can someone tell me why this happens?

Here are my CCS config files:

fence_devices {
    c1-locksrv {
        agent = "fence_gnbd"
        server = "cluster1"
    }
    manual-reset {
        agent = "fence_manual"
    }
}

nodes {
    cluster1 {
        ip_interfaces {
            eth0 = "192.168.0.1"
        }
        fence {
            fenceCluster1 {
                manual-reset {
                    ipaddr = "192.168.0.1"
                }
            }
        }
    }
    cluster2 {
        ip_interfaces {
            eth0 = "192.168.0.2"
        }
        fence {
            fenceCluster2 {
                c1-locksrv {
                    ipaddr = "192.168.0.2"
                }
            }
        }
    }
}

cluster {
    name = "rac"
    lock_gulm {
        servers = ["cluster1"]
        heartbeat_rate = 3
        allowed_misses = 10
    }
}
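In case it helps with diagnosis: a common first step for this symptom is to confirm which locking protocol and lock table the filesystem was actually created with, since a GFS filesystem built or mounted with lock_nolock behaves as single-node storage. A hedged diagnostic sketch, assuming the GFS 6.x userland tools; /dev/pool/pool1 is the device from the post, /mnt/gfs is a placeholder mount point:

```shell
# Check the locking protocol recorded in the superblock; for a gulm
# cluster it should report lock_gulm, not lock_nolock.
gfs_tool sb /dev/pool/pool1 proto

# Check the lock table; it must be "<clustername>:<fsname>" and be
# identical on both nodes (the cluster name here is "rac", per the
# cluster section of the CCS config).
gfs_tool sb /dev/pool/pool1 table

# Mount without overriding the superblock's lock protocol on the
# command line (no -o lockproto=... option).
mount -t gfs /dev/pool/pool1 /mnt/gfs   # /mnt/gfs is an assumption
```

If the superblock already says lock_gulm and the tables match, the next thing to rule out would be the gulm master not having quorum when the first mount happens.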