Re: Help: gluster-block

[ adding +gluster-users for archive purposes ]

On Sat, Mar 23, 2019 at 1:51 AM Jeffrey Chin <jeffrey.chin@xxxxxxxxxxxx> wrote:
>
> Hello Mr. Kalever,

Hello Jeffrey,

>
> I am currently working on a project to utilize GlusterFS for VMware VMs. In our research, we found that utilizing block devices with GlusterFS would be the best approach for our use case (correct me if I am wrong). I saw the gluster utility that you contribute to, gluster-block (https://github.com/gluster/gluster-block), and I had a question about its configuration. From what I understand, gluster-block only works on the servers that are serving the gluster volume. Would it be possible to run the gluster-block utility on a client machine that has a gluster volume mounted to it?

Yes, that is right! At the moment gluster-block is coupled with
glusterd for simplicity.
But we have made some changes here [1] to provide a way to specify a
server address (volfile-server) outside the gluster-blockd node;
please take a look.

Although it is not a complete solution, it should at least help with
some use cases. Feel free to raise an issue [2] with the details of
your use case, or submit a PR yourself :-)
We never pursued it further, as we never had a use case that needed
gluster-blockd and glusterd to be separated.
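
For context, block devices are normally created by running the
gluster-block CLI on one of the gluster nodes themselves. A minimal
sketch (the volume name, block name, host IPs and size below are just
placeholders for your setup):

  # on one of the gluster nodes, with gluster-blockd running and a
  # replica volume named block-test already created:
  gluster-block create block-test/sample-block ha 3 \
      192.168.1.11,192.168.1.12,192.168.1.13 1GiB

  # inspect the result:
  gluster-block list block-test
  gluster-block info block-test/sample-block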

>
> I also have another question: how do I make the iSCSI targets persist if all of the gluster nodes were rebooted? It seems like once all of the nodes reboot, I am unable to reconnect to the iSCSI targets created by the gluster-block utility.

Do you mean rebooting the iSCSI initiator, or the gluster-block/gluster
target/server nodes?

1. For the initiator to automatically reconnect to the block devices
after a reboot, make the following change in /etc/iscsi/iscsid.conf:
node.startup = automatic
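
A minimal sketch of applying this to targets that were already
discovered (editing iscsid.conf only affects future discoveries; the
IQN and portal below are placeholders for your setup):

  # update the existing node record for the gluster-block target:
  iscsiadm -m node -T <target-iqn> -p <gluster-node-ip>:3260 \
      --op update -n node.startup -v automatic

  # verify the setting took effect:
  iscsiadm -m node -T <target-iqn> -p <gluster-node-ip>:3260 | grep node.startup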

2. If you mean the case where all the gluster nodes go down: on the
initiator all the available HA paths will then be down, but if you
still want the IO to be queued on the initiator until one of the
paths (gluster nodes) becomes available again, then in the
gluster-block specific section of multipath.conf you need to replace
'no_path_retry 120' with 'no_path_retry queue' (a sketch follows
after the note below).
Note: refer to the README for the current multipath.conf setting
recommendations.
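
A minimal sketch of that device section (only the no_path_retry line
is the point here; take the exact attribute set from the README for
your gluster-block version):

  # /etc/multipath.conf -- gluster-block (LIO/tcmu) device section
  devices {
          device {
                  vendor "LIO-ORG"
                  path_grouping_policy "failover"
                  path_checker "tur"
                  no_path_retry queue    # was: no_path_retry 120
          }
  }

After editing, restart multipathd (e.g. 'systemctl restart multipathd')
so the change takes effect.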

[1] https://github.com/gluster/gluster-block/pull/161
[2] https://github.com/gluster/gluster-block/issues/new

BRs,
--
Prasanna
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


