On Sun, Sep 21, 2008 at 1:46 AM, Paolo Supino <paolo.supino at gmail.com> wrote:
> Hi Krishna
>
> I have every intention of mounting the toaster's filesystems on the
> head node using iSCSI (see my original post), but I have a problem: I
> have 35 servers in the gluster filesystem that the researchers don't
> see directly: the head node hides the private network with
> PAT/PNAT/masquerading (pick your favorite acronym), so the clients
> only see the head node and not the gluster filesystem servers behind
> it. The glusterfs clients therefore get a wrong picture of the
> gluster filesystem ... I could simply remove the masquerading, but
> I'd rather not, because that adds systems administration overhead
> and breaks the KISS rule.

Ah OK, you have 35 storage servers apart from the toaster. If I
understand you correctly, you are planning to run the glusterfs client
on the head node and re-export that mount point to the researchers'
nodes? If so, you could set up port forwarding on the head node and
avoid the re-export completely, so that the researchers' nodes access
the storage nodes directly (rough sketches below, after the quoted
thread).

Regards
Krishna

> --
> TIA
> Paolo
>
>
> Krishna Srinivas wrote:
>> Paolo,
>>
>> You could mount the toaster's partitions on the head node using iSCSI.
>> Run the glusterfs server on the head node, exporting the two partitions.
>> Run the glusterfs client on the researchers' nodes.
>>
>> Krishna
>>
>> 2008/9/18 Paolo Supino <paolo.supino at gmail.com>:
>>> Hi
>>>
>>> Now that I have a new shiny parallel filesystem :-) I want to take
>>> it a step forward (the fun never ends ;-) ) ...
>>> A few words on my HPC cluster:
>>> 1. The private network between the compute nodes, the head node and
>>> the toaster (NetApp FAS 2020) is Gigabit Ethernet.
>>> 2. The toaster exports 2.1 TB and 5.1 TB volumes served over NFSv3
>>> (ouch ...)
>>> 3. Only the head node is multi-homed and connected to the faculty
>>> network, where the researchers are ...
>>>
>>> What I thought of doing:
>>> 1. Re-export the toaster's volumes using iSCSI.
>>> 2. Mount the iSCSI exports on the head node and add them to the
>>> gluster volume. This is pretty straightforward :-) and voilà, I
>>> have a uniform 9.3 TB volume ...
>>> 3. The last part is the tricky part that I still have to figure
>>> out: have the researchers be gluster clients of this volume without
>>> exposing the private network to the faculty network (I don't want
>>> to NFS-export it).
>>>
>>> --
>>> Paolo
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
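
P.S. A few rough sketches to make the above concrete. Every host
name, IP address, device name and port number below is made up as an
example -- adjust them to your setup. First, logging the head node
into the toaster's iSCSI targets with open-iscsi (this assumes the
LUNs have already been created on the filer and mapped to the head
node's initiator):

  # discover the targets the toaster offers (portal IP is an example)
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10

  # log in to the discovered target (NetApp-style IQN, serial made up)
  iscsiadm -m node -T iqn.1992-08.com.netapp:sn.12345678 \
      -p 192.168.1.10 --login

  # the LUNs appear as local disks, e.g. /dev/sdb and /dev/sdc;
  # mkfs only if these are fresh LUNs -- it destroys existing data
  mkfs.ext3 /dev/sdb
  mkfs.ext3 /dev/sdc
  mkdir -p /mnt/toaster1 /mnt/toaster2
  mount /dev/sdb /mnt/toaster1
  mount /dev/sdc /mnt/toaster2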
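
The head node can then export those mounts through a glusterfs server
volfile, roughly like this (1.3-era syntax; option names -- auth.ip
vs. auth.addr in particular -- differ between glusterfs releases, so
check against the version you actually run):

  # /etc/glusterfs/glusterfs-server.vol on the head node (sketch)
  volume toaster1
    type storage/posix
    option directory /mnt/toaster1
  end-volume

  volume toaster2
    type storage/posix
    option directory /mnt/toaster2
  end-volume

  volume server
    type protocol/server
    option transport-type tcp/server
    option auth.ip.toaster1.allow *
    option auth.ip.toaster2.allow *
    subvolumes toaster1 toaster2
  end-volume

The two new volumes would then be listed next to the existing 35
bricks in whatever aggregation translator (cluster/unify etc.) your
client volfiles already use.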
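
For the port forwarding itself, one DNAT rule per storage node on the
head node is enough: map a distinct faculty-facing port to each
node's glusterfsd. The interface names, the 192.168.1.x addressing
and port 6996 (the glusterfsd default around this release, if I
remember right) are all assumptions:

  # enable routing on the head node
  echo 1 > /proc/sys/net/ipv4/ip_forward

  # eth0 = faculty network, eth1 = private cluster network;
  # node01..node35 live at 192.168.1.101..192.168.1.135, so
  # head:7001 -> node01:6996, head:7002 -> node02:6996, ...
  for i in $(seq 1 35); do
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport $((7000 + i)) \
        -j DNAT --to-destination 192.168.1.$((100 + i)):6996
    iptables -A FORWARD -i eth0 -o eth1 -p tcp \
        -d 192.168.1.$((100 + i)) --dport 6996 -j ACCEPT
  done

If the storage nodes already use the head node as their default
gateway (likely, given the masquerading), replies find their own way
back; otherwise add an SNAT/MASQUERADE rule on eth1 so return traffic
passes back through the head node.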
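
On the researchers' nodes each brick is then addressed through the
head node instead of directly, along these lines (again 1.3-era
syntax, all names invented):

  # one of 35 such stanzas in the researchers' client volfile;
  # remote-host is the head node's faculty-side name, and port 7001
  # is forwarded to node01:6996 by the rules above
  volume node01
    type protocol/client
    option transport-type tcp/client
    option remote-host head.faculty.example.edu
    option remote-port 7001
    option remote-subvolume brick
  end-volume

That keeps the private addresses completely hidden: the clients only
ever talk to the head node, and nothing has to be re-exported.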