Hi,

We're working on a two-node Pacemaker cluster that provides iSCSI LUNs via SCST. The LUNs are either software RAID (md) arrays located in a shared JBOD, or DRBD resources (active/passive). We are adding bcache to the mix, with local SSDs (i.e. not shared, but dedicated to each cluster node). We are using write-through mode.

I need to evaluate the risk when moving a backing device (md) from cache set #1 (on node 1) to cache set #2 (on node 2), and then back to cache set #1.

Scenario:
- md attached to cache set #1 and working (on node 1)
- md detached from cache set #1
- md stopped on node 1
- md started on node 2
- md attached to cache set #2 on node 2

At this point, cache set #1 is attached to nothing, but still holds valid blocks "linked" to the backing md device.

- md detached from cache set #2
- md stopped on node 2
- md started on node 1
- md RE-attached to cache set #1 on node 1

At this point, I need to make sure that bcache will not serve "stale" blocks left over from the first attachment. My understanding is that because the backing device was attached to a different cache set (#2) in between, this is recorded in the bcache superblock, so the blocks that were valid during the first attachment will no longer be served. Can you please confirm whether this is safe, or whether we need to take special care to invalidate the original cache set?

Thanks a lot,

- Patrick -
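For reference, here is a sketch of the sysfs operations behind the scenario above, as I understand the bcache interface. Device names (md0), the cache-set UUID, and the SSD device are placeholders for our setup; the script defaults to a dry run that only prints the writes it would perform:

```shell
#!/bin/sh
# Sketch of the bcache attach/detach steps from the scenario.
# DRYRUN=1 (default) prints each sysfs write instead of performing it.
DRYRUN=${DRYRUN:-1}

sysfs_write() {
    # $1 = value to write, $2 = sysfs path
    if [ "$DRYRUN" = 1 ]; then
        echo "echo $1 > $2"
    else
        echo "$1" > "$2"
    fi
}

MD=md0                                        # placeholder backing device
CSET1=11111111-2222-3333-4444-555555555555    # placeholder cache set #1 UUID

# Detach the backing device from its current cache set. In write-through
# mode there is no dirty data to flush back first.
sysfs_write 1 "/sys/block/$MD/bcache/detach"

# ... stop md on node 1, start it on node 2, attach it to cache set #2
# there, then detach, stop, and move it back to node 1 ...

# Back on node 1: re-attach the backing device to the original cache set.
sysfs_write "$CSET1" "/sys/block/$MD/bcache/attach"

# Belt-and-braces option if stale blocks are a concern: unregister the old
# cache set and re-create the cache device, destroying its contents.
sysfs_write 1 "/sys/fs/bcache/$CSET1/unregister"
# make-bcache -C /dev/ssd0                    # placeholder SSD device
```

The last two lines are the "special care" variant: rather than relying on the superblock bookkeeping, wipe cache set #1 entirely before re-attaching, at the cost of starting with a cold cache.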