On Thu, Jun 12, 2008 at 2:21 PM, Victor San Pedro <vsanpedro@xxxxxxxxxxx> wrote: > Hello Krishna. > Yes, there goes my spec files. > I'm sending you the client file. Servers don't need change from my point > of view. > > This first is: "glusterfs-client.vol -> 1unify_over_2afr_over_4volumes" > (WORKS FINE but I need balanced readings between servers, the readings > can not focus on one server. I would need all the servers working. Our > project is based on customers access for reading files). The reads are load balanced in afrs, in the sense a file gets read from a subvol depending on its inode number. So if lot of files are being read all the servers will be equally loaded. However if a single file is being read all the time, only one subvol will be loaded. > > ############################################ > ############################################ > > # CONFIGURACIÓN DE LOS FICHEROS DE DATOS. > > volume vshedir1 > type protocol/client > option transport-type tcp/client > option remote-host 192.168.100.101 > option remote-subvolume vshedir1 > end-volume > > volume vplutus1 > type protocol/client > option transport-type tcp/client > option remote-host 192.168.100.104 > option remote-subvolume vplutus1 > end-volume > > volume vthitus1 > type protocol/client > option transport-type tcp/client > option remote-host 192.168.100.102 > option remote-subvolume vthitus1 > end-volume > > volume vlagafh1 > type protocol/client > option transport-type tcp/client > option remote-host 192.168.100.103 > option remote-subvolume vlagafh1 > end-volume > > ########################################### > ########################################### > > # CONFIGURACIÓN DE LOS VOLUMENES PARA NS > > > volume vshedir_ns > type protocol/client > option transport-type tcp/client > option remote-host 192.168.100.101 > option remote-subvolume vshedir_ns > end-volume > > volume vplutus_ns > type protocol/client > option transport-type tcp/client > option remote-host 192.168.100.104 > option remote-subvolume vplutus_ns > end-volume > > volume vthitus_ns > type protocol/client > option transport-type tcp/client > option remote-host 192.168.100.102 > option remote-subvolume vthitus_ns > end-volume > > volume vlagafh_ns > type protocol/client > option transport-type tcp/client > option remote-host 192.168.100.103 > option remote-subvolume vlagafh_ns > end-volume > > > ########################################### > ########################################### > > # CONFIGURACION DE LOS ESPEJOS PARA DATOS > > volume afr1 > type cluster/afr > subvolumes vshedir1 vthitus1 > end-volume > > volume afr2 > type cluster/afr > subvolumes vplutus1 vlagafh1 > end-volume > > > ########################################### > ########################################### > > # CONFIGURACION DEL VOLUMEN DE NOMBRES > > volume afr_ns > type cluster/afr > subvolumes vshedir_ns vthitus_ns vplutus_ns vlagafh_ns > end-volume > > ########################################### > ########################################### > > # CONFIGURACION DEL VOLUMEN DE UNIFICACION > > volume unify > type cluster/unify > option namespace afr_ns > option scheduler rr > subvolumes afr1 afr2 > end-volume > > ########################################### > ########################################### > > # CONFIGURACION DEL BOOSTER > > volume booster > type performance/booster > option transport-type tcp > subvolumes unify > end-volume > > > > This other is an step beyond the first one, trying striping but keeping > the last configuration: > This second is: "glusterfs-client.vol -> > 1unify_over_2afr_over_striping_over_4volumes" (DO NOT WORK PROPERLY WITH > THE FOLLOWING SPEC) > > ############################################ > ############################################ > # CONFIGURACIÓN DE LOS FICHEROS DE DATOS. > > volume vshedir1 > type protocol/client > option transport-type tcp/client > option remote-host 192.168.100.101 > option remote-subvolume vshedir1 > end-volume > > volume vplutus1 > type protocol/client > option transport-type tcp/client > option remote-host 192.168.100.104 > option remote-subvolume vplutus1 > end-volume > > volume vthitus1 > type protocol/client > option transport-type tcp/client > option remote-host 192.168.100.102 > option remote-subvolume vthitus1 > end-volume > > volume vlagafh1 > type protocol/client > option transport-type tcp/client > option remote-host 192.168.100.103 > option remote-subvolume vlagafh1 > end-volume > > ########################################### > ########################################### > # CONFIGURACION DE LOS VOLUMENES PARA NS > > volume vshedir_ns > type protocol/client > option transport-type tcp/client > option remote-host 192.168.100.101 > option remote-subvolume vshedir_ns > end-volume > > volume vplutus_ns > type protocol/client > option transport-type tcp/client > option remote-host 192.168.100.104 > option remote-subvolume vplutus_ns > end-volume > > volume vthitus_ns > type protocol/client > option transport-type tcp/client > option remote-host 192.168.100.102 > option remote-subvolume vthitus_ns > end-volume > > volume vlagafh_ns > type protocol/client > option transport-type tcp/client > option remote-host 192.168.100.103 > option remote-subvolume vlagafh_ns > end-volume > > ########################################### > ########################################### > # CONFIGURACION DEL VOLUMEN DE NOMBRES > > volume afr_ns > > type cluster/afr > subvolumes vshedir_ns vthitus_ns vplutus_ns vlagafh_ns > > end-volume > > ########################################### > ########################################### > # ESPEJOS DE DATOS > > volume afr1 > type cluster/afr > subvolumes vshedir1 vthitus1 > end-volume > > volume afr2 > type cluster/afr > subvolumes vplutus1 vlagafh1 > end-volume > > ########################################### > ########################################### > # CONFIGURACION DEL LOS VOLUMENES PARA STPNG > > *volume striping0 > type cluster/stripe > option block-size *:512KB > subvolumes vshedir1 vthitus1 > end-volume > > volume striping1 > type cluster/stripe > option block-size *:512KB > subvolumes vplutus1 vlagafh1 > end-volume* > > ########################################### > ########################################### > # CONFIGURACION DEL VOLUMEN DE UNIFICACION > > volume unify > type cluster/unify > option namespace afr_ns > option scheduler rr > *subvolumes afr1 afr2* > end-volume > > ########################################### > ########################################### > # CONFIGURACION DEL BOOSTER > > volume booster > type performance/booster > option transport-type tcp > subvolumes unify > end-volume > This spec is not OK in the sense stripe vols are not being used, its just declared, it should have been subvol of another volume. > > If I use this last configuration, I obtain permission errors when I try > to read the files in gluster, although the files seem to be written > properly mirrored on servers. This 2nd config would be similar to your 1st config, i.e unify over afr as the stripe vols are not being used. So are you sure that this is the spec file that is giving errors? > > If I use this last configuration, but I change in the section > "volume unify" the afr subvolumes for the striping ones "striping0" > and "striping1", I can read the files properly, but it seems that the > writting is completed in the striping way, and the files on servers are > not properly mirrored. > > I think that this last configuration has not sense because the striping > volumes or the afrs (deppending on which one I'm using with the unify) > remain as isolated, but I do not know how could I use all together. > > We are planning to have thousands of readings in a few minutes, so we > need to balance that readings between all the servers. For that reason I > would like to use striping (I understood that striping is read balance > in glusterfs, but I could be wrong or missunderstood this concept). > But, at the same time, I would need that if a server fails the service > would keep on running. > > Can I managed this goal with glusterfs? I think yes, but I could not > make the correct configuration yet. For that reason I need your help... When you use stripe, the reads will be load balanced, in the sense a stripe decides on the subvol to read from based on the offset of the read call. So when a big file is read, all the subvols are used. When you use stripe and afr using 4 storage/posix vols, you wont be able to use unify. it has to be - stripe over 2 afrs, each afr over 2 storage/posix (or protocol/cient) Please mail back in case you have doubts. Krishna > > Thank you. > Victor. > > > > Krishna Srinivas escribió: >> Victor, >> Can you paste spec files? and which version are you using? >> Krishna >> >> On Wed, Jun 11, 2008 at 1:54 PM, Victor San Pedro <vsanpedro@xxxxxxxxxxx> wrote: >> >>> Hello. >>> >>> Finally I managed to obtain good time results with my old computers with >>> the booster volume in "unify" over "afr"... >>> It was important for me to obtain these sort of result with old machines... >>> >>> Problems came later. >>> >>> "A" - If I make "unify" over "afr" and afterwards "striping" when I want >>> to read a file I obtain a read error by permissions. >>> I have checked the permissions and are ok. What could be wrong with this >>> configuration? >>> Please, have you seen configuration "A" working properly at your tests? >>> >>> "B" - If I make "unify" over "striping" and afterwards "afr", I can read >>> the files, but the storage (because of the final striping) of the files >>> seems not redundant, therefor not high availability... >>> >>> My configuration >>> >>> # >>> # server1------- --- >>> # | str1&afr0 | >>> # server2 ---------- | >>> # | | | Unify >>> # server3 ------ |str2&afr1 | >>> # | | >>> # server4 ---------- --- >>> >>> >>> Do you know how can I obtain high availability with the following >>> prerequisites? >>> 1) I need to read the files balanced. (2MB from one server, afterwards >>> other 2MB from other server and so on) -> Striping >>> 2) Writting of files have to be balanced to my "afr" volumes -> Unify >>> over afr >>> >>> Thank you very much. >>> Víctor. >>> >>> >>> _______________________________________________ >>> Gluster-devel mailing list >>> Gluster-devel@xxxxxxxxxx >>> http://lists.nongnu.org/mailman/listinfo/gluster-devel >>> >> >> >> > >