Re: Fwd: Snapshot feature design review

Thanks for responding on this.

As far as snapshot schedules are concerned, I'd recommend that the definition of a schedule be kept separate from the snapshot itself, with each snapshot then associated with a schedule. This would enable:
- schedules to be centralised and used for other functions
- schedules to be reused across volumes
- schedules to be regarded as a "policy" and applied across multiple clusters, potentially by RHS-C, driving site standards and consistency (sketched below)
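
Purely as an illustration of that decoupling (none of this syntax exists today; the command names and options are made up):

    gluster schedule create daily-7 --start 2013-11-01T02:00 --interval 1d --keep 7
    gluster snapshot schedule <vol-name> daily-7

The same "daily-7" definition could then be referenced by other volumes, used by other functions, or pushed out across clusters by RHS-C as a site-wide policy.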

Cheers,

PC

----- Original Message -----
> From: "Nagaprasad Sathyanarayana" <nsathyan@xxxxxxxxxx>
> To: "Fred van Zwieten" <fvzwieten@xxxxxxxxxxxxx>, "Paul Cuzner" <pcuzner@xxxxxxxxxx>
> Cc: "Shishir Gowda" <sgowda@xxxxxxxxxx>, "Anand Subramanian" <ansubram@xxxxxxxxxx>
> Sent: Tuesday, 29 October, 2013 6:15:19 AM
> Subject: Re: Fwd: Snapshot feature design review
> 
> Hi Paul, Fred,
> 
> Thank you for providing valuable inputs. We shall certainly go through these
> and update you further.
> 
> Regards
> Nagaprasad
> 
> 
> > On 28-Oct-2013, at 12:56 pm, Fred van Zwieten <fvzwieten@xxxxxxxxxxxxx>
> > wrote:
> > 
> > Hi,
> > 
> > I have mostly the same points as Paul mentioned. I would also like to see a
> > snap retention feature. This could be built into the scheduling mechanism.
> > Something like this:
> > 
> > gluster snapshot create <vol-name> [-n <snap-name>] [-d <description>]
> > [-s <name>:<start-datetime>:<delta-datetime>:<keep> ...]
> > 
> > Where:
> > <name> is the name of this schedule
> > <start-datetime> is the timestamp for the first snapshot
> > <delta-datetime> is the specification for the time between snapshots
> > <keep> is the number of snapshots to keep for this schedule
> > 
> > Multiple schedules should be possible.
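> > 
> > For example, with illustrative values (assuming a simple duration syntax
> > for <delta-datetime>):
> > 
> > gluster snapshot create myvol -s hourly:2013-11-01T00:00:1h:24 -s
> > daily:2013-11-01T02:00:24h:7
> > 
> > would keep 24 hourly and 7 daily snapshots of myvol.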
> > 
> > Another thing, concerning the space management of snapshots: there should
> > be an absolute maximum size limit on a volume plus all of its snapshots.
> > Look at NetApp's implementation for inspiration.
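> > 
> > To make that concrete (the option name and numbers here are hypothetical),
> > something like
> > 
> > gluster volume set myvol snap-reserve 20%
> > 
> > on a 1 TB volume would cap the space consumed by snapshots at roughly 200
> > GB, so runaway snapshot growth could never starve the live data.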
> > 
> > Cheers,
> > 
> > Fred
> > 
> > 
> >> On Mon, Oct 28, 2013 at 4:26 AM, Paul Cuzner <pcuzner@xxxxxxxxxx> wrote:
> >> 
> >> Hi,
> >> 
> >> I've just reviewed the doc, and would like to clarify a couple of things
> >> regarding the proposed design.
> >> 
> >> 
> >> - I don't see a snapshot schedule type command to generate automated
> >> snapshots. What's the plan here? In a distributed environment the
> >> schedule for snapshots should be an attribute of the volume, shouldn't it?
> >> If we designate a node in the cluster as 'master' and use cron to manage
> >> the snaps, what happens when that node is down, rebuilt, or loses its
> >> config? (See the cron sketch after this list.) To me there seems to be a
> >> requirement for a gluster scheduler to manage snapshots, and potentially
> >> future tasks like post-process dedupe, data integrity checking, or maybe
> >> even geo-rep intervals.
> >> 
> >> - snapshots are reliant upon dm-thinp, which means this version of lvm is
> >> a dependency. Is there a clear path for migrating from classic lvm to
> >> dm-thinp based LVs, or are snapshots in 3.5 basically going to be a
> >> feature from this point forward, i.e. no backwards compatibility?
> >> 
> >> - when managing volumes holding snaps, visibility of the capacity usage
> >> attributed to snaps is key, but I don't see a means of discerning the
> >> space usage per snap in the CLI breakdown.
> >> 
> >> - on other systems, I've had hung backup tasks (for days!) holding on to
> >> snaps, causing space usage to climb against the primary volume. There I
> >> was able to see snap usage and which client had the snapshot open in
> >> order to troubleshoot. In that scenario, how will the glusterfs snapshot
> >> present itself and be managed?
> >> 
> >> - How will the snapshot volume be perceived by Windows clients over SMB?
> >> Will these users be able to use the Previous Versions tab in the file
> >> properties dialog in Explorer, for example? (See the Samba sketch after
> >> this list.)
> >> 
> >> - a volume snapshot is based on snaps of the component bricks. 3.4 changed
> >> the way that bricks are used on a vol create to require a directory on a
> >> filesystem rather than the filesystem itself. This change enables users to
> >> create multiple volumes from the same physical brick by placing different
> >> directories in the brick's root, which is not necessarily a good idea.
> >> Given the 1:1 requirement of brick:volume, will this CLI behaviour be
> >> reverted to the way it was in 3.3?
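> >> 
> >> To make the scheduler concern above concrete, the obvious workaround today
> >> would be a cron job on one designated node, something like this (purely
> >> illustrative, reusing the snapshot create syntax proposed elsewhere in
> >> this thread):
> >> 
> >> # /etc/cron.d/gluster-snap on the "master" node
> >> # take a snapshot of myvol every night at 02:00
> >> 0 2 * * * root gluster snapshot create myvol -n nightly
> >> 
> >> If that one node is down or rebuilt without its crontab, snapshots for the
> >> volume silently stop, hence the case for a cluster-aware scheduler.
> >> 
> >> On the SMB question, my assumption is that snapshots would be exposed
> >> under some directory on the volume (e.g. a .snaps directory; an assumption
> >> on my part). If so, and if the snapshot directories carry a timestamp in
> >> their names, Samba's shadow_copy2 VFS module could surface them in the
> >> Previous Versions tab, along these lines:
> >> 
> >> [gvol]
> >>     path = /mnt/glusterfs/gvol
> >>     vfs objects = shadow_copy2
> >>     shadow:snapdir = .snaps
> >>     shadow:format = GMT-%Y.%m.%d-%H.%M.%S
> >> 
> >> Whether that works depends entirely on how the snapshots are presented to
> >> clients, which is really the question above.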
> >> 
> >> Happy to talk further about any of the above, if needed.
> >> 
> >> Regards,
> >> 
> >> Paul Cuzner
> >> 
> >> 
> >> 
> >> 
> >> 
> >> 
> >> ----- Original Message -----
> >> > From: "Nagaprasad Sathyanarayana" <nsathyan@xxxxxxxxxx>
> >> > To: gluster-devel@xxxxxxxxxx
> >> > Sent: Friday, 18 October, 2013 5:22:31 AM
> >> > Subject: Fwd: Snapshot feature design review
> >> >
> >> > Gluster devel included.
> >> >
> >> > Thanks
> >> > Naga
> >> >
> >> > Begin forwarded message:
> >> >
> >> > From: Nagaprasad Sathyanarayana <nsathyan@xxxxxxxxxx>
> >> > Date: 17 October 2013 9:45:05 pm IST
> >> > To: Shishir Gowda <sgowda@xxxxxxxxxx>
> >> > Cc: anands@xxxxxxxxxx, rfortier@xxxxxxxxxx, ssaha@xxxxxxxxxx,
> >> > aavati@xxxxxxxxxx, atumball@xxxxxxxxxx, vbellur@xxxxxxxxxx,
> >> > vraman@xxxxxxxxxx, lpabon@xxxxxxxxxx, kkeithle@xxxxxxxxxx,
> >> > jdarcy@xxxxxxxxxx, gluster-devel@xxxxxxxxxx
> >> > Subject: Re: Snapshot feature design review
> >> >
> >> > + Gluster devel.
> >> >
> >> > Hi all,
> >> >
> >> > Kindly review the design and provide any comments by next week. We are
> >> > targeting to have the review comments incorporated in the design and to
> >> > post the final design by the 28th of this month (October). If you would
> >> > like to discuss the design, please let us know by the 21st or 22nd of
> >> > this month. If anybody not copied here should be involved in the design
> >> > review, please feel free to forward the design document to them.
> >> >
> >> > Thanks
> >> > Naga
> >> >
> >> >
> >> >
> >> > On 16-Oct-2013, at 7:03 pm, Shishir Gowda <sgowda@xxxxxxxxxx> wrote:
> >> >
> >> > Hi All,
> >> >
> >> > The design document has been updated, and we have tried to address all
> >> > the review comments and design issues to the best of our ability.
> >> >
> >> > Please review the design and the document when possible.
> >> >
> >> > The design document can be found @
> >> > https://forge.gluster.org/snapshot/pages/Home
> >> >
> >> > Please feel free to critique/comment.
> >> >
> >> > With regards,
> >> > Shishir
> >> >
> >> > _______________________________________________
> >> > Gluster-devel mailing list
> >> > Gluster-devel@xxxxxxxxxx
> >> > https://lists.nongnu.org/mailman/listinfo/gluster-devel
> >> >
> >> 
> >> _______________________________________________
> >> Gluster-devel mailing list
> >> Gluster-devel@xxxxxxxxxx
> >> https://lists.nongnu.org/mailman/listinfo/gluster-devel
> > 
> 


