> So . . . about that new functionality.  The core idea of data
> classification is to apply step 6c repeatedly, with variants of DHT that
> do tiering or various other kinds of intelligent placement instead of
> the hash-based random placement we do now.  "NUFA" and "switch" are
> already examples of this.  In fact, their needs drove some of the code
> structure that makes data classification (DC) possible.
>
> The trickiest question with DC has always been how the user specifies
> these complex placement policies, which we then turn into volfiles.  In
> the interests of maximizing compatibility with existing scripts and user
> habits, what I propose is that we do this by allowing the user to
> combine existing volumes into a new higher-level volume.  This is

I like the idea for its simplicity.  Abstracting 'tiers' as volumes is
natural from a manageability point of view, for the following reason:
tiering is about partitioning resources among data based on the data's
needs, which is done by moving data to its best-suited resource (e.g., a
particular kind of disk).  The secondary-volumes approach lets us
partition the resources and their management together.

> (D) Secondary volumes may not be started and stopped by the user.
> Instead, a secondary volume is automatically started or stopped along
> with its primary.

Wouldn't it help in some cases to have secondary volumes running while
the primary is not, e.g. for some form of maintenance activity?

> (E) The user must specify an explicit option to see the status of
> secondary volumes.  Without this option, secondary volumes are hidden
> and status for their constituent bricks will be shown as though they
> were (directly) part of the corresponding primary volume.
>
> As it turns out, most of the "extra" volfiles in step 8 above also
> have their own steps 6d and 7, so implementing step C will probably make
> those paths simpler as well.
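To illustrate what "combining existing volumes into a new higher-level
volume" might compile down to, here is a hypothetical volfile fragment.
The volume names and the `cluster/tier-dht` translator type are purely
illustrative assumptions, not actual glusterd output:

```
# Hypothetical volfile fragment -- names and the tiering translator
# type are illustrative, not generated by any real glusterd version.
volume fast-tier
    type cluster/distribute          # secondary volume 1 (e.g. SSD bricks)
    subvolumes fast-client-0 fast-client-1
end-volume

volume slow-tier
    type cluster/distribute          # secondary volume 2 (e.g. HDD bricks)
    subvolumes slow-client-0 slow-client-1
end-volume

volume tiered
    type cluster/tier-dht            # a DHT variant doing tiered placement
    subvolumes fast-tier slow-tier   # secondaries composed into the primary
end-volume
```

The point of the sketch is that each secondary remains an ordinary
volume graph, and the primary simply stacks a placement-policy
translator on top of them.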
> The one big remaining question is how this will work in terms of
> detecting and responding to volume configuration changes.  Currently we
> treat each volfile as a completely independent entity, and just compare
> whole graphs.  Instead, what we need to do is track dependencies between
> graphs (a graph of graphs?) so that a change to a secondary volume will
> "ripple up" to its primary where a new graph can be generated and
> compared to its predecessor.

IIUC, (E) describes that the primary volfile would be generated with all
secondary-volume references resolved.  Wouldn't that preclude the
respective processes from discovering the dependencies?

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
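To make the "graph of graphs" idea concrete, here is a minimal sketch
(hypothetical class and method names, not actual glusterd code) of how a
change to a secondary volume could invalidate, and thereby trigger
regeneration of, its primary's graph:

```python
# Sketch of graph-of-graphs dependency tracking: a config change on a
# secondary volume "ripples up" to every primary that contains it.
# All names here are assumptions for illustration only.

class Volume:
    def __init__(self, name):
        self.name = name
        self.primaries = []   # volumes that include this one as a subvolume
        self.generation = 0   # bumped whenever this graph must be rebuilt

    def add_secondary(self, secondary):
        # Record the reverse edge so invalidation can propagate upward.
        secondary.primaries.append(self)

    def touch(self):
        # A configuration change invalidates this volume's graph and,
        # recursively, the graph of every primary above it, where a new
        # graph can be generated and compared to its predecessor.
        self.generation += 1
        for primary in self.primaries:
            primary.touch()

fast = Volume("fast-tier")
slow = Volume("slow-tier")
tiered = Volume("tiered")
tiered.add_secondary(fast)
tiered.add_secondary(slow)

fast.touch()              # change an option on one secondary...
print(tiered.generation)  # ...and the primary is marked for regeneration
```

The reverse edges (secondary to primary) are what the current
one-volfile-at-a-time comparison lacks; with them, only the affected
primaries need new graphs.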