Re: blueprint: consistency groups

Hi,

I have a question regarding the implementation of the 'add image to
consistency group' operation.
Since it is a multi-object operation, the image itself can be deleted
while I am adding an image reference to the consistency group.
By the time I start adding the consistency group reference to the
image, the image may already be gone.
So I have to keep track of the current state of the operation in the
consistency group.
However, if the librbd client loses its connection to the Ceph
cluster, everything will be left in this partially updated state.
It is not clear who is responsible for picking up this state and
cleaning it all up.
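
To make the ordering concrete, here is a minimal sketch of the flow I
have in mind, with plain Python dicts standing in for the group and
image headers. The state names and helpers are illustrative only, not
actual librbd/cls_rbd metadata.

class ImageDoesNotExist(Exception):
    pass

INCOMPLETE = "incomplete"   # group references the image, image not yet updated
ATTACHED = "attached"       # both directions of the link exist

def add_image_to_group(group, image_name, images):
    # Phase 1: record the image in the group, flagged INCOMPLETE, so an
    # interrupted operation can be recognised later.
    group["images"][image_name] = INCOMPLETE

    # Phase 2: add the back-reference on the image. The image may have
    # been deleted concurrently; if so, roll back phase 1.
    if image_name not in images:
        del group["images"][image_name]
        raise ImageDoesNotExist(image_name)
    images[image_name]["group"] = group["name"]

    # Phase 3: mark the link complete. If the client disconnects before
    # this point, the group is left holding the INCOMPLETE marker.
    group["images"][image_name] = ATTACHED

# Example: the image vanishes before phase 2.
group = {"name": "cg1", "images": {}}
images = {}   # "vol1" was deleted before we got here
try:
    add_image_to_group(group, "vol1", images)
except ImageDoesNotExist:
    print("rolled back:", group["images"])   # -> {}

The open question is the case where the client dies between phase 1
and phase 3, so nobody runs the rollback or the final step.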

Should there be a special operation that allows us to check the
consistency of all consistency groups?
Is there an agent that is supposed to verify the consistency of all
rbd objects regularly?
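
For illustration, one possible shape such a check could take,
continuing the sketch above (reusing the INCOMPLETE/ATTACHED markers;
again, this is just the logic, not a proposal for an actual API):

def scrub_groups(groups, images):
    # Walk every group and resolve any link left in the INCOMPLETE
    # state by a client that disconnected mid-operation.
    for group in groups:
        for image_name, state in list(group["images"].items()):
            if state != INCOMPLETE:
                continue
            if image_name in images:
                # Image still exists: finish the interrupted operation.
                images[image_name]["group"] = group["name"]
                group["images"][image_name] = ATTACHED
            else:
                # Image is gone: drop the dangling reference.
                del group["images"][image_name]

What is unclear to me is who would run something like this, and when.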

Please advise.

Thanks in advance,
Victor.

P.S. Here is a link to the etherpad with the updated blueprint:
http://pad.ceph.com/p/consistency_groups

On Tue, Mar 29, 2016 at 10:54 AM, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
> On Fri, Mar 25, 2016 at 12:19 AM, Mykola Golub <mgolub@xxxxxxxxxxxx> wrote:
>> On Thu, Mar 24, 2016 at 03:38:25PM -0700, Gregory Farnum wrote:
>>
>>> Just to be clear, you know that you can use the same snapid for all
>>> the images, right? (...although not if you're trying to allow CGs
>>> across pools. Those will be a little harder.)
>>
>> I think not allowing CGs across pools would be an annoying
>> limitation for users. If a VM has several volumes for different
>> purposes (e.g. a small but fast volume for application data, and a
>> large but slow one for backups), it is logical to have different
>> pools for those.
>
> This isn't impossible either. We do it with cache pools and backing
> pools sharing a set of snapids, for instance. But the default rules
> won't work, since a snapid is just a 64-bit counter and they're
> per-pool rather than cluster-global.
> -Greg
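
To make the per-pool counter point concrete, here is a toy
illustration in plain Python (not Ceph code): each pool hands out
snapids from its own counter, so the same id cannot be assumed to be
free, or to mean the same snapshot, in two different pools.

class Pool:
    def __init__(self, name):
        self.name = name
        self.next_snapid = 1          # per-pool counter, not cluster-global

    def create_snap(self):
        snapid = self.next_snapid
        self.next_snapid += 1
        return snapid

fast = Pool("fast-app-data")
slow = Pool("slow-backups")

for _ in range(5):                    # unrelated snapshot activity in one pool
    fast.create_snap()

print(fast.create_snap(), slow.create_snap())   # 6 1 -- the counters drift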


