On Fri, 16 May 2008, Derek Price wrote:
> gordan@xxxxxxxxxx wrote:
>> Isn't that effectively the same thing? Unless there is quorum, DLM locks
>> out the entire FS (it also does this when a node dies, until it gets
>> definitive confirmation that the node has been successfully fenced). For
>> normal file I/O, all nodes in the cluster have to acknowledge a lock
>> before it can be granted.
> Why? It requires a meta-data cache, but as long as every node in the
> quorum stores a given file's most recent revision # when any lock is
> granted, even if it doesn't actually sync the file data, then any quorum
> should be able to agree on what the version number of the most up-to-date
> copy of a file is. All nodes are required to report only if you assume
> that any given file has a small number of "owners" and that the querier
> doesn't know who the owner is.
That's to do with file versioning, not locking, though. What am I missing?
> To remain fault tolerant, this requires that servers make some effort to
> stay up-to-date with the meta-data cache, but maybe this could be dealt
> with efficiently with the DHT someone else brought up?
I'm not sure that so much metadata caching is actually necessary. If a
file open brings the file to the local machine (this cannot be guaranteed,
because the local machine may be out of space and may be unable to free
space by expunging an old file if that file isn't yet redundant enough
elsewhere in the network), then the metadata of that file, being attached
to the file, is implicitly "cached". But this isn't really caching at
all - it's migration.
The algorithm for opening a file might be as follows (a rough sketch
follows the list):
1) the node broadcasts/multicasts an open request to all peers
2) peers that have the file available respond with the metadata they hold
(size, version, etc.) and possibly their current load (to assist with
load balancing by fetching the file from the least loaded peer)
3.1) if the file is available locally, agree a lock with the other nodes,
and use it.
3.2) if the file is not available locally but there is enough space, fetch
it and do 3.1)
3.3) if there isn't enough space locally to fetch the file, see if enough
space can be freed; if this succeeds, do 3.2)
3.4) if space cannot be freed, use the file remotely from the least loaded
peer.
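As a minimal sketch of that decision sequence in Python - the PeerReply
shape, the store interface, and the fetch/open_remote callables are all
assumptions for illustration, and the lock agreement in 3.1 is elided:

    from dataclasses import dataclass

    @dataclass
    class PeerReply:
        peer: str       # assumed peer identifier
        size: int       # file size from the peer's metadata
        version: int    # peer's latest version of the file
        load: float     # peer's current load, for load balancing

    def open_file(path, store, replies, fetch, open_remote):
        # Step 2: prefer the least loaded peer among those holding the file.
        replies = sorted(replies, key=lambda r: r.load)
        # 3.1: file already local - agree a lock (elided here) and use it.
        if store.has(path):
            return store.open(path)
        size = max(r.size for r in replies)
        # 3.3: not enough room - try to free space by expunging.
        if store.free_space() < size:
            store.try_expunge(size - store.free_space())
        # 3.2: room found - fetch from the least loaded peer, then as in 3.1.
        if store.free_space() >= size:
            fetch(replies[0].peer, path)
            return store.open(path)
        # 3.4: space could not be freed - use the file remotely.
        return open_remote(replies[0].peer, path)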
The expunging algorithm would be similar - broadcast a file status request
(similar to step 1 above). If enough nodes respond with the latest version
of the file (with the threshold set according to how much redundancy is
required), the local copy can be removed and the space freed for a file
that is more useful locally. This shouldn't really happen until the local
data store starts to get full.
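The redundancy check there could be as simple as counting peers that
report the latest version; a sketch, with the reply format and the default
threshold being assumptions:

    def can_expunge(local_version, peer_versions, redundancy_threshold=3):
        # peer_versions: versions reported by peers in reply to the
        # broadcast status request; the threshold is an assumed default.
        up_to_date = sum(1 for v in peer_versions if v >= local_version)
        return up_to_date >= redundancy_threshold

For example, can_expunge(7, [7, 7, 6, 7]) returns True with the default
threshold, since three peers already hold version 7.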
Locking could be handled somewhat lazily - a lock request gets broadcast,
and as long as a quorum of peers responds and no peer says "no, I have
that lock!", the lock can be granted. A lock could have a TTL (in case a
node dies while holding a lock), with a refresh expected if the node
wants to keep the lock. This could be used to speed up locking: each node
would keep a list of currently valid locks without having to check
explicitly, and would only broadcast a lock request when it looks like the
lock can be granted.
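To make the TTL idea concrete, a node's passively maintained view of the
lock table might look something like this - a sketch that assumes the
broadcast/quorum round itself is handled elsewhere, with an arbitrary
5-second TTL:

    import time

    LOCK_TTL = 5.0  # seconds between required refreshes (assumed value)

    class LockTable:
        """A node's passively maintained list of currently valid locks."""
        def __init__(self):
            self.locks = {}  # path -> (holder, expiry time)

        def looks_grantable(self, path, requester):
            # Only broadcast a lock request if no unexpired lock is
            # known to be held by another node.
            entry = self.locks.get(path)
            if entry is None:
                return True
            holder, expiry = entry
            return holder == requester or time.time() >= expiry

        def record(self, path, holder):
            # Called when a lock is granted or refreshed; a dead
            # holder's lock simply times out.
            self.locks[path] = (holder, time.time() + LOCK_TTL)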
For file delta writes, an AFR-type mechanism could be used to send the
deltas to all the nodes that hold the file. This could all get quite
tricky, because keeping network bandwidth down might require a separate
multicast group for up to every subset of nodes - otherwise you'd just
end up broadcasting to all nodes, which means things wouldn't scale the
way switched networks should; it'd be more like using hubs.
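One way to avoid keeping per-group state is to derive the group address
deterministically from the replica set, so every member computes the same
address independently. A sketch, assuming hashing into the
administratively scoped 239.0.0.0/8 range (a hash collision between two
replica sets would just mean some harmless extra traffic):

    from hashlib import sha1

    def replica_set_to_group(node_ids):
        # Hash the sorted member list so ordering doesn't matter.
        digest = sha1(",".join(sorted(node_ids)).encode()).digest()
        # Keep 23 bits - the part of an IPv4 multicast address that
        # survives the mapping onto an Ethernet MAC address.
        bits = int.from_bytes(digest[:3], "big") & 0x7FFFFF
        return "239.%d.%d.%d" % (bits >> 16, (bits >> 8) & 0xFF, bits & 0xFF)

    print(replica_set_to_group(["nodeA", "nodeB", "nodeC"]))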
That scheme would potentially run up against the limited IP multicast
address space (28 bits of IPv4 group addresses, of which only 23 bits
survive the mapping onto Ethernet MAC addresses), but that should still
provide enough groups at sensible redundancy levels to cover all node
combinations. This may or may not be way OTT complicated, though; there
is probably a simpler and saner solution.
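A quick back-of-the-envelope check (the cluster size and redundancy level
are made-up numbers) suggests the space is comfortable as long as groups
exist only per replica set rather than per arbitrary subset:

    from math import comb

    nodes = 100                       # assumed cluster size
    redundancy = 3                    # assumed replicas per file
    needed = comb(nodes, redundancy)  # one group per possible replica set
    available = 2 ** 23               # distinct groups after the MAC mapping
    print(needed, available)          # 161700 vs 8388608 - plenty of room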
Gordan