Re: [PATCH] reinstate ceph cluster_snap support

On Dec 18, 2013, Gregory Farnum <greg@xxxxxxxxxxx> wrote:

> (you've never been getting a real point-in-time
> snapshot, although as long as you didn't use external communication
> channels you could at least be sure it contained a causal cut).

I never expected more than a causal cut, really (my wife got a PhD in
consistent checkpointing of distributed systems, so my expectations may
be somewhat better informed than those a random user might have ;-),
although even now I've seldom got a snapshot in which the osd data
differs across replicas (I actually check for that; that's part of my
reason for taking the snapshots in the first place), even when I fail to
explicitly make the cluster quiescent.  But that's probably just “luck”,
as my cluster usually isn't busy when I take such snapshots ;-)
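
Something along these lines is what I have in mind for the comparison,
as a minimal sketch only: the flat file layout and checksum choice are
made up for illustration, and a real osd store is more structured than
this.

  # Illustrative sketch: walk two snapshotted osd data trees and compare
  # per-file checksums; any path printed is inconsistent across replicas.
  import hashlib, os, sys

  def checksums(root):
      sums = {}
      for dirpath, _dirs, files in os.walk(root):
          for name in files:
              path = os.path.join(dirpath, name)
              with open(path, 'rb') as f:
                  data = f.read()
              rel = os.path.relpath(path, root)
              sums[rel] = hashlib.sha256(data).hexdigest()
      return sums

  a = checksums(sys.argv[1])   # e.g. replica 1's snapshot dir
  b = checksums(sys.argv[2])   # e.g. replica 2's snapshot dir
  for rel in sorted(set(a) | set(b)):
      if a.get(rel) != b.get(rel):
          print('mismatch:', rel)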

> And of course that's nothing compared to snapshotting the monitors, as
> you've noticed

I've given it some more thought, and it occurred to me that, if we make
mons take the snapshot when the snapshot-taking request is committed to
the cluster history, we should have the snapshots taken at the right
time and without the need for rolling them back and taking them again.

The idea is that, if the snapshot-taking is committed, eventually we'll
have a quorum carrying that commit, and thus each of the quorum members
will have taken a snapshot as soon as they got that commit, even if they
did so during recovery, or if they took so long to take the snapshot
that they got kicked out of the quorum for a while.  If they actually
get restarted, they will get the commit again and take the snapshot from
the beginning.  If all mons in the quorum that accepted the commit get
restarted so that none of them actually records the commit request, and
it doesn't get propagated to other mons that attempt to rejoin, well,
it's as if the request had never been committed.  OTOH, if it did get to
other mons, or if any of them survives, the committed request will make
it to a quorum and eventually to all monitors, each one taking its
snapshot at the time it gets the commit.
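
In pseudo-code, the behavior I'm proposing would be something like the
sketch below; the names and toy data structures are made up (the real
monitor is C++ Paxos machinery), so take it as intended semantics only.

  # Toy sketch: apply committed transactions strictly in commit order;
  # when the applied transaction is the snapshot request, snapshot the
  # local store right there, before applying anything later.
  from collections import namedtuple

  Txn = namedtuple('Txn', 'kind data')   # kind: 'cluster_snap' or other

  class Mon:
      def __init__(self):
          self.state = []           # toy stand-in for the mon's database
          self.snapshots = {}       # snap name -> copy of state
          self.last_applied = 0     # version of the last commit applied

      def apply_commits(self, commits):
          # commits: (version, txn) pairs, delivered in order, whether
          # during normal operation or while recovering/syncing.
          for version, txn in commits:
              assert version == self.last_applied + 1  # no gaps/reordering
              if txn.kind == 'cluster_snap':
                  # The snapshot covers exactly the state up to this
                  # commit, no matter when this mon learns of it.
                  self.snapshots[txn.data] = list(self.state)
              else:
                  self.state.append(txn.data)
              self.last_applied = version

  mon = Mon()
  mon.apply_commits([(1, Txn('map', 'epoch 1')),
                     (2, Txn('cluster_snap', 'snap-a')),
                     (3, Txn('map', 'epoch 2'))])
  assert mon.snapshots['snap-a'] == ['epoch 1']   # epoch 2 not in the snap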

This should work as long as all mons get recovery info in the same
order, i.e., they never record in their database any information that
happens-after the snapshot commit before recording the snapshot commit
itself, nor do they record the snapshot commit before all the
information that happened-before it.  That said, having little idea of
the inner workings of the monitors, I can't tell whether they actually
meet this “as long as” condition ;-(
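
To illustrate what a violation would look like: if a syncing mon copied
in state from commit 3 before learning of the snapshot request committed
at 2, its snapshot would include post-snapshot state, and the cut would
no longer be causal.  A toy version of the check:

  # Toy illustration of the ordering condition.  Versions are commit
  # numbers; the snapshot request committed at version 2.
  def snapshot_is_causal(stream, snap_version=2):
      seen_later = False
      for version, _kind in stream:
          if version > snap_version:
              seen_later = True
          if version == snap_version and seen_later:
              return False   # snapshot taken after later state arrived
      return True

  assert snapshot_is_causal([(1, 'map'), (2, 'cluster_snap'), (3, 'map')])
  assert not snapshot_is_causal([(1, 'map'), (3, 'map'),
                                 (2, 'cluster_snap')])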

> — but making it actually be a cluster snapshot (instead
> of something you could basically do by taking a btrfs snapshot
> yourself)

Taking btrfs snapshots manually over several osds on several hosts is
hardly a way to get a causal cut (but you already knew that ;-)

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Compiler Engineer