Hi,
On Wednesday 07 May 2014 10:52 PM, Jeff Darcy wrote:
>> Attached is a basic write-up of the user-serviceable snapshot feature
>> design (Avati's). Please take a look and let us know if you have
>> questions of any sort...
> A few.
>
> The design creates a new type of daemon: snapview-server.
>
> * Where is it started? One server (selected how) or all?
It is started on all the servers in the cluster.
> * How do clients find it? Are we dynamically changing the client
> side graph to add new protocol/client instances pointing to new
> snapview-servers, or is snapview-client using RPC directly? Are
> the snapview-server ports managed through the glusterd portmapper
> interface, or patched in some other way?
We add a protocol/client instance on the client side that connects to
the protocol/server translator loaded in the snapview-server daemon.
So, for a call under .snaps, the flow would look like:
snapview-client -> protocol/client -> protocol/server -> snapview-server.
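To make the split concrete, here is a minimal sketch (not GlusterFS
source, which is C; names like `route_fop`, `vol-dht`, and
`snapd-client` are illustrative) of the decision a
snapview-client-style translator makes for each operation:

```python
def route_fop(path, regular_subvol="vol-dht", snapd_subvol="snapd-client"):
    """Return which subvolume should service an operation on `path`.

    Anything under a ".snaps" component is handed to the protocol/client
    instance that talks to snapview-server; everything else takes the
    normal brick path.
    """
    parts = path.strip("/").split("/")
    if ".snaps" in parts:
        return snapd_subvol
    return regular_subvol
```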
Yes, the snapview-server ports are handled through the glusterd portmapper.
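In volfile terms, the addition described above might look roughly like
the following hypothetical client-graph fragment (volume names and the
remote-subvolume name are illustrative, not taken from the design
write-up):

```
volume vol-snapd-client
    type protocol/client
    option remote-host <server>
    # no remote-port option: the port is resolved at connect time
    # by querying the glusterd portmapper
    option remote-subvolume snapview-server
end-volume

volume vol-snapview-client
    type features/snapview-client
    # first child: the regular data path; second: the .snaps path
    subvolumes vol-dht vol-snapd-client
end-volume
```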
> * Since a snap volume will refer to multiple bricks, we'll need
> more brick daemons as well. How are *those* managed?
Brick processes associated with the snapshot will be started.
- Varun Shastry
> * How does snapview-server manage user credentials for connecting
> to snap bricks? What if multiple users try to use the same
> snapshot at the same time? How does any of this interact with
> on-wire or on-disk encryption?
>
> I'm sure I'll come up with more later. Also, next time it might
> be nice to use the upstream feature proposal template *as it was
> designed* to make sure that questions like these get addressed
> where the whole community can participate in a timely fashion.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-devel