On 11/15/2010 01:46 PM, Anselm Strauss wrote:
> But you can not remove a brick without interrupting applications that
> access data on that brick, right? The documentation says: "Data residing
> on the brick that you are removing will no longer be accessible at the
> Gluster mount point." So is there a way of doing a "rebalance" so that
> data on the brick to be removed will first be relocated, in the
> background and online, onto other bricks?
>
> What I meant by splitting and merging volumes was motivated by a use
> case I often see: when you decide that certain data in a folder needs to
> be isolated onto a separate volume. This happens, e.g., when an NFS
> volume needs to be restricted in access to only certain hosts, or when
> one wants to avoid the interruption of an application by another one
> that runs wild and fills up all the space. Often I see that people first
> realize this after the application is already in production, or simply
> when the requirements change.
>
> I always see this myself with ZFS. Data sets are really cheap and easy
> to create, so I start with one for the whole pool; then later, when I
> see how the application really works, I start to split it up and set
> different properties (like quota, compression, etc.) for each data set.
> But when I do that later on, I always have to migrate the data manually
> to the new mount point and interrupt the application.
>
> But I guess this is also a problem of the idea of mount points. How do
> you move data between mount points without interrupting access to it?
>
>
> On 11/14/10 03:59, Craig Carl wrote:
>> Anselm -
>> You can remove a brick online, but you can't change the type of an
>> existing volume. If you could explain what you want to do with a
>> 'merge' and a 'split' I could give you a better answer. You can 'split'
>> a volume by moving half the data to another volume, and 'merge' data by
>> copying all the data from one volume to another; is that what you want
>> to do?
>>
>> Parity-based storage in a distributed file system is difficult for
>> several reasons. We are currently investigating some possibilities with
>> erasure coding and will keep everyone up to date on our progress.
>>
>>
>> Thanks,
>>
>> Craig
>>
>> --
>> Craig Carl
>> Senior Systems Engineer
>> Gluster
>>
>>
>> From: "Anselm Strauss" <amsibamsi at gmail.com>
>> To: gluster-users at gluster.org
>> Sent: Saturday, November 13, 2010 1:56:03 AM
>> Subject: Online operations
>>
>> Hi,
>>
>> I have done some testing with glusterfs on the localhost. I was
>> wondering which operations you can do online with a glusterfs volume.
>>
>> Is it possible to remove a brick and shrink the volume without taking
>> some data offline? Like a pvmove in Linux LVM that moves all data off a
>> disk before you take it offline?
>>
>> Are the following operations possible to do online?
>>
>> - Change between mirroring and striping
>> - Change the mirror or stripe count
>> - Merge two volumes
>> - Split a volume into two
>>
>> Is there a plan for supporting other redundancy levels than mirror,
>> e.g. RAID 5, 6, ...?
>>
>> Thanks for any ideas,
>> Anselm Strauss
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

Anselm -
I think you may be looking at older documentation. Gluster 3.1 introduced
a migrate command; I think it is exactly what you are looking for -
http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Migrating_Volumes.
I think you identified the problem with 'splitting' a volume: the volume
name has to change, so there is an application interruption no matter what.

Thanks,
Craig

--
Craig Carl
Senior Systems Engineer
Gluster
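For reference, the Gluster 3.1 CLI invocations for the operations discussed
above look roughly like the sketch below. The volume name (`test-volume`)
and brick paths (`server2:/exp2`, etc.) are placeholders, not values from
this thread; check the 3.1 documentation linked above for the exact syntax
on your version before running anything against a production cluster.

```shell
# Shrink a volume online by removing a brick. Note: in 3.1, remove-brick
# does not move data off the brick first, which is the caveat Anselm
# quoted from the documentation.
gluster volume remove-brick test-volume server2:/exp2

# Spread data evenly across the remaining bricks of a distributed volume.
gluster volume rebalance test-volume start
gluster volume rebalance test-volume status

# Online migration of data from one brick to another (the 3.1 "migrate"
# feature): start it, watch progress, then commit once complete.
gluster volume replace-brick test-volume server2:/exp2 server5:/exp5 start
gluster volume replace-brick test-volume server2:/exp2 server5:/exp5 status
gluster volume replace-brick test-volume server2:/exp2 server5:/exp5 commit
```

The `replace-brick` sequence is the closest analogue to `pvmove`: data is
copied to the new brick in the background while the volume stays mounted,
and the old brick is only dropped at `commit`.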