On 04/30/2014 12:03 PM, Paul Cuzner wrote:
From: "Sahina Bose" <sabose@xxxxxxxxxx>
To: "Paul Cuzner" <pcuzner@xxxxxxxxxx>, "Rajesh Joseph" <rjoseph@xxxxxxxxxx>
Cc: gluster-devel@xxxxxxxxxxx
Sent: Wednesday, 30 April, 2014 6:06:44 PM
Subject: Re: Snapshot CLI question
On 04/30/2014 07:14 AM, Paul Cuzner wrote:
I guess my point about loss of data following a restore boils down to the
change of brick names that the restore process "forces". I may be wrong,
but doesn't this mean that any "downstream" monitoring/scripts have to
adjust their idea of what the volume looks like? It just sounds like more
work for the admin to me...
From a Nagios monitoring view, we have an auto-discovery plugin that scans
the volumes for any change. So once a volume snapshot is restored, it will
be detected as though new bricks have been added to the volume (due to the
new brick names) and the older bricks deleted.
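As a rough illustration (hypothetical brick paths, not the actual plugin
code), the detection boils down to a set comparison of the brick lists
before and after the restore:

    # Rough sketch of the brick-list comparison (hypothetical paths,
    # not the actual Nagios plugin code).
    old_bricks = {"server1:/bricks/vol1/b1", "server2:/bricks/vol1/b2"}

    # After a restore, "gluster volume info" reports the snapshot's brick
    # paths instead of the original ones.
    new_bricks = {"server1:/run/gluster/snaps/snap1/brick1",
                  "server2:/run/gluster/snaps/snap1/brick2"}

    added = new_bricks - old_bricks      # show up as newly added bricks
    removed = old_bricks - new_bricks    # show up as deleted bricks

    for brick in sorted(added):
        print("new brick discovered:", brick)
    for brick in sorted(removed):
        print("brick no longer present:", brick)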
Makes sense
What we're currently missing is a mechanism to know that the change in the
volume happened due to a snapshot restore.
With regard to monitoring history, we plan to keep the previous
configuration as a backup, along with any performance data that was
collected for it.
Sounds good. From an admin's standpoint, following a restore operation,
will a performance graph against the volume show data points from before
and after the restore, or will the previous data be kept but not be
immediately usable?
Yes. The performance graph for the volume should show the volume capacity
metrics before and after the restore (need to test this out). The brick
performance graphs will, however, be separate, with the old bricks' data
kept as a backup.
The other thing that wasn't clear from the video was the impact on fstab.
What happens here in relation to the brick name change?
From: "Rajesh Joseph" <rjoseph@xxxxxxxxxx>
To: "Paul Cuzner" <pcuzner@xxxxxxxxxx>
Cc: "gluster-devel" <gluster-devel@xxxxxxxxxx>, "Sahina Bose" <sabose@xxxxxxxxxx>
Sent: Tuesday, 29 April, 2014 10:13:00 PM
Subject: Re: Snapshot CLI question
Hi Paul,
We are thinking of providing policy-driven auto-delete in the future, where
the user can provide various policies to control auto-delete, e.g. delete
the oldest snapshot, delete the snapshot with the maximum disk utilization,
etc. What you mentioned can also be part of the policy.
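As a rough sketch of what such a policy hook could look like (the snapshot
records and field names below are made up, not the gluster implementation):

    # Illustrative sketch of a pluggable auto-delete policy.
    snapshots = [
        {"name": "snap1", "created": "2014-04-20", "used_pct": 12.0},
        {"name": "snap2", "created": "2014-04-25", "used_pct": 48.0},
        {"name": "snap3", "created": "2014-04-28", "used_pct": 30.0},
    ]

    policies = {
        # delete the oldest snapshot first
        "oldest": lambda snaps: min(snaps, key=lambda s: s["created"]),
        # delete the snapshot consuming the most space in the thin pool
        "max_disk_utilization": lambda snaps: max(snaps,
                                                  key=lambda s: s["used_pct"]),
    }

    def pick_victim(snaps, policy="oldest"):
        """Return the snapshot the configured policy would delete next."""
        return policies[policy](snaps)

    print(pick_victim(snapshots, "oldest")["name"])                # snap1
    print(pick_victim(snapshots, "max_disk_utilization")["name"])  # snap2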
Loss of monitoring history has nothing to do with dm-thinp. The monitoring
tool keeps the history of changes seen by each brick. After the restore,
the monitoring tool has no way to map the newer bricks to the older bricks,
so it discards the history of the older bricks.
I am sure the monitoring team has plans to evolve and fix this.
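Purely as an illustration of the gap (all brick names and data points below
are made up): if a restore preserved the brick ordering, the collected
history could in principle be re-keyed onto the new brick paths, e.g.:

    # Illustrative only: carry old data points over to the post-restore
    # brick names, assuming the restore preserves brick ordering.
    old_bricks = ["server1:/bricks/vol1/b1", "server2:/bricks/vol1/b2"]
    new_bricks = ["server1:/run/gluster/snaps/snap1/brick1",
                  "server2:/run/gluster/snaps/snap1/brick2"]

    history = {
        "server1:/bricks/vol1/b1": [("2014-04-28", 71.2), ("2014-04-29", 73.5)],
        "server2:/bricks/vol1/b2": [("2014-04-28", 64.0), ("2014-04-29", 66.1)],
    }

    remapped = {new: history.get(old, [])
                for old, new in zip(old_bricks, new_bricks)}

    for brick, points in remapped.items():
        print(brick, "->", len(points), "historical samples")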
Best Regards,
Rajesh
----- Original Message -----
From: "Paul Cuzner" <pcuzner@xxxxxxxxxx>
To: "Rajesh Joseph" <rjoseph@xxxxxxxxxx>
Cc: "gluster-devel" <gluster-devel@xxxxxxxxxx>,
"Sahina Bose" <sabose@xxxxxxxxxx>
Sent: Tuesday, April 29, 2014 5:22:42 AM
Subject: Re: Snapshot CLI question
No worries, Rajesh.
Without --xml we're limiting the automation potential and resorting to
'screen scraping' - so with that said, is --xml in the plan, or do you need
an RFE?
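For example, with XML output a script could pull volume and brick details
without scraping text. The sketch below models the layout loosely on the
existing 'gluster volume info --xml' output; the exact element names for
the snapshot commands are an assumption:

    # Minimal sketch of consuming XML output instead of screen scraping.
    import xml.etree.ElementTree as ET

    sample = """<cliOutput>
      <volInfo>
        <volumes>
          <volume>
            <name>vol1</name>
            <bricks>
              <brick>server1:/bricks/vol1/b1</brick>
              <brick>server2:/bricks/vol1/b2</brick>
            </bricks>
          </volume>
        </volumes>
      </volInfo>
    </cliOutput>"""

    root = ET.fromstring(sample)
    for vol in root.iter("volume"):
        name = vol.findtext("name")
        bricks = [b.text for b in vol.iter("brick")]
        print(name, bricks)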
Brick paths changing and loss of history present an interesting problem for
monitoring and capacity planning - especially if the data is lost! As an
admin, this would be a real concern. Is this something that will evolve, or
is this just the flip side of using dm-thinp as the provider for the
volume/snapshots?
The other question I raised was around triggers for autodelete. Having a
set number of snapshots is fine, but I've seen environments in the past
where autodelete was needed to protect the pool when snapshot deltas were
large - i.e. autodelete gets triggered at a thinpool free-space threshold
(see the sketch below).
Is this last item in the plan? Does it make sense? Does
it need an RFE?
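To make the trigger concrete, here's a rough sketch (the pool name, the
threshold, and the assumption that 'gluster snapshot list' prints oldest
first are all illustrative):

    # Sketch of a free-space-triggered auto-delete.
    import subprocess

    POOL = "gluster_vg/thinpool"   # assumed VG/thin-pool name
    THRESHOLD_PCT = 80.0           # trigger when the pool is this full

    def pool_usage_pct(pool=POOL):
        """Read dm-thinp data usage via 'lvs -o data_percent'."""
        out = subprocess.check_output(
            ["lvs", "--noheadings", "-o", "data_percent", pool])
        return float(out.decode().strip())

    def oldest_snapshot(volume="vol1"):
        """Assumes 'gluster snapshot list <vol>' prints oldest first."""
        out = subprocess.check_output(["gluster", "snapshot", "list", volume])
        names = out.decode().split()
        return names[0] if names else None

    if pool_usage_pct() > THRESHOLD_PCT:
        victim = oldest_snapshot()
        if victim:
            # --mode=script avoids the interactive confirmation prompt
            subprocess.call(
                ["gluster", "--mode=script", "snapshot", "delete", victim])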
Cheers,
PC
----- Original Message -----
> From: "Rajesh Joseph" <rjoseph@xxxxxxxxxx>
> To: "Paul Cuzner" <pcuzner@xxxxxxxxxx>
> Cc: "gluster-devel" <gluster-devel@xxxxxxxxxx>,
"Sahina Bose"
> <sabose@xxxxxxxxxx>
> Sent: Monday, 28 April, 2014 9:47:04 PM
> Subject: Re: Snapshot CLI question
> Sorry, Paul, for this late reply.
> As of now we are not supporting the --xml option.
> And restore does change the brick path. Users using their own monitoring
> scripts need to be aware of this scenario.
> RHS monitoring will monitor the new bricks once restored, but the history
> related to the older bricks might be lost.
> Sahina: Would you like to comment on the monitoring part of the question?
> Thanks & Regards,
> Rajesh
> ----- Original Message -----
> From: "Paul Cuzner" <pcuzner@xxxxxxxxxx>
> To: "gluster-devel" <gluster-devel@xxxxxxxxxx>
> Sent: Wednesday, April 23, 2014 5:14:44 AM
> Subject: Snapshot CLI question
> Hi,
> Having seen some of the demos/material around the snapshot CLI, a couple
> of questions came up:
> Will --xml be supported?
> The other question I have relates to brick names. In a demo video I saw
> the brick names change following a 'restore' operation (i.e. vol info
> shows different paths, pointing to the paths associated with the
> snapshot).
> Is this currently the case, and if so, does this pose a problem for
> monitoring?
> Cheers,
> Paul C
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> https://lists.nongnu.org/mailman/listinfo/gluster-devel