Thanks Joe,

>> Isn't there a 1:1 relationship between brick and server?
> In my configuration, 1 server has 4 drives (well, 5, but one's the OS).
> Each drive has one gpt partition. I create an lvm volume group that
> holds all four huge partitions. For any one GlusterFS volume I create 4
> lvm logical volumes:
>
> lvcreate -n a_vmimages clustervg /dev/sda1
> lvcreate -n b_vmimages clustervg /dev/sdb1
> lvcreate -n c_vmimages clustervg /dev/sdc1
> lvcreate -n d_vmimages clustervg /dev/sdd1
>
> then format them xfs and mount them under
> /data/glusterfs/vmimages/{a,b,c,d}. These four lvm partitions are bricks
> for the new GlusterFS volume.

Followed. Actually, going to redo it this way, but will use a RAID
instead of individual drives. Thanks.

> As glusterbot would say if asked for the glossary:
>> A "server" hosts "bricks" (ie. server1:/foo) which belong to a
>> "volume" which is accessed from a "client".

Yes, checked the manual glossary and it's well explained. Had yet to
read those last pages.

> My volume would then look like
>
> gluster volume create vmimages replica 3 \
>     server{1,2,3}:/data/glusterfs/vmimages/a/brick \
>     server{1,2,3}:/data/glusterfs/vmimages/b/brick \
>     server{1,2,3}:/data/glusterfs/vmimages/c/brick \
>     server{1,2,3}:/data/glusterfs/vmimages/d/brick
>
>>> Each vm image is only 6 gig, enough for the operating system and
>>> applications and is hosted on one volume. The data for each application
>>> is hosted on its own GlusterFS volume.
>> Hmm, pretty good idea, especially security wise. Means one VM cannot
>> mess with another VM's files. Is it possible to extend a gluster volume
>> without destroying and recreating it with a bigger peer storage setting?
> I can do that two ways. I can add servers with storage and then
> add-brick to expand, or I can resize the lvm partitions and grow xfs
> (which I have done live several times).

Will be going with LVM, now that I understand what a brick is.

>>> For mysql, I set up my innodb store to use 4 files (I don't do 1 file
>>> per table), each file distributes to each of the 4 replica subvolumes.
>>> This balances the load pretty nicely.
> It's not so much a "how glusterfs works" question as much as it is a
> "how innodb works" question. By configuring the innodb_data_file_path to
> start with a multiple of your bricks (and carefully choosing some
> filenames to ensure they're distributed evenly), records seem to be (and
> I only have tested this through actual use and have no idea if this is
> how it's supposed to work) accessed evenly over the distribute set.

Hmm, have you checked on the gluster servers that these four files are
in separate bricks? As far as I understand, if you have not done
anything with the GlusterFS scheduler (default ALU on version 3.3), it
is likely that is not what's happening. Or you are using a version that
has a different scheduler. Interesting though. Poke around and update
us please.

Thanks
William
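
For reference, the "format them xfs and mount them" step Joe describes
might look roughly like this for one of the four logical volumes. This is
only a sketch: the mkfs option and the brick subdirectory are assumptions
based on his naming above, not something he posted.

    mkfs.xfs -i size=512 /dev/clustervg/a_vmimages    # 512-byte inodes leave room for gluster xattrs
    mkdir -p /data/glusterfs/vmimages/a
    mount /dev/clustervg/a_vmimages /data/glusterfs/vmimages/a
    mkdir -p /data/glusterfs/vmimages/a/brick         # this directory is what the volume uses as a brick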
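
Likewise, a minimal sketch of the "resize the lvm partitions and grow xfs"
route Joe mentions, done live; the size is arbitrary and the names again
assume his layout:

    lvextend -L +100G /dev/clustervg/a_vmimages
    xfs_growfs /data/glusterfs/vmimages/a    # xfs_growfs takes the mount point, not the device

You would repeat this for the b, c and d bricks, and on each of the three
servers, so all bricks in the volume stay the same size.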
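
Joe's innodb_data_file_path remark would translate into something like the
following in my.cnf, with the data directory living on a GlusterFS client
mount. The file names, sizes and directory here are made up; he notes the
names have to be chosen carefully so they end up distributed evenly across
the bricks.

    [mysqld]
    innodb_data_home_dir = /var/lib/mysql
    innodb_data_file_path = ibdata1:1024M;ibdata2:1024M;ibdata3:1024M;ibdata4:1024M:autoextend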
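
As for checking which brick each file actually landed on, one way is to
read the pathinfo xattr that GlusterFS exposes on a FUSE client mount
(the mount point and file name below are hypothetical), or simply to look
at the brick directories on the servers:

    getfattr -n trusted.glusterfs.pathinfo /mnt/gluster/ibdata1
    ls -l /data/glusterfs/vmimages/*/brick/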