Re: [Gluster-devel] Regarding Glusterfs file locking


 



We also don't have the budget for full 3-way replication, so we're using "replica 3 arbiter 1": the data is replicated over 2 nodes and the third holds only metadata (the arbiter). You have to distribute the bricks across the 3 nodes manually so that no single node ever hosts two bricks (data or arbiter) of the same replica set. We use a rotation scheme:

Brick clustor00:/srv/bricks/00/d  49152  0  Y  2506
Brick clustor01:/srv/bricks/00/d  49152  0  Y  2569516
Brick clustor02:/srv/quorum/00/d  49152  0  Y  296886
Brick clustor02:/srv/bricks/00/d  49152  0  Y  296886
Brick clustor00:/srv/bricks/01/d  49152  0  Y  2506
Brick clustor01:/srv/quorum/00/d  49152  0  Y  2569516
Brick clustor01:/srv/bricks/01/d  49152  0  Y  2569516
Brick clustor02:/srv/bricks/01/d  49152  0  Y  296886
Brick clustor00:/srv/quorum/00/d  49152  0  Y  2506
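For reference, a layout like the one above can be created in one shot by listing the bricks in rotated order: in a replica 3 arbiter 1 volume every third brick of each group becomes the arbiter. A rough sketch only (the volume name "myvol" is a placeholder, not our real one, and this shows just the first three replica sets):

gluster volume create myvol replica 3 arbiter 1 \
  clustor00:/srv/bricks/00/d clustor01:/srv/bricks/00/d clustor02:/srv/quorum/00/d \
  clustor02:/srv/bricks/00/d clustor00:/srv/bricks/01/d clustor01:/srv/quorum/00/d \
  clustor01:/srv/bricks/01/d clustor02:/srv/bricks/01/d clustor00:/srv/quorum/00/d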

We currently have 30 bricks + 15 quorums per node, but it seems that's a bit too much even with 192 GB of RAM...

HIH
Diego

On 03/02/2023 11:39, Maaz Sheikh wrote:
Hi,
Greetings of the day,

We checked the GlusterFS documentation for two-way replication across three storage devices (nodes) but did not find any straightforward information for this scenario. Please suggest a solution.

As per the documentation, three storage devices (nodes) imply three-way replication, which does not match our scaling requirements.


Any help is highly appreciated.


Thanks,
Maaz Sheikh
------------------------------------------------------------------------
*From:* Strahil Nikolov <hunter86_bg@xxxxxxxxx>
*Sent:* Friday, February 3, 2023 4:15 AM
*To:* gluster-devel@xxxxxxxxxxx <gluster-devel@xxxxxxxxxxx>; gluster-users@xxxxxxxxxxx <gluster-users@xxxxxxxxxxx>; Maaz Sheikh <maaz.sheikh@xxxxxxxxxxx>
*Cc:* Rahul Kumar Sharma <rrsharma@xxxxxxxxxxx>; Sweta Dwivedi <sweta.dwivedi@xxxxxxxxxxx>; Pushpendra Garg <pushpendra.garg@xxxxxxxxxxx>
*Subject:* Re: [Gluster-devel] Regarding Glusterfs file locking
As far as I remember there are only 2 types of locking in Linux:
- Advisory
- Mandatory
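Keep in mind that advisory locks only protect against processes that also ask for the lock; a plain open()/write() on the other node is not blocked. A rough shell sketch of the cooperative pattern (paths and program names are placeholders, and whether flock(2) locks propagate between GlusterFS clients depends on your volume and client versions, so treat it as an illustration only):

# node1: hold an exclusive advisory lock while writing
flock -x /mnt/gluster/shared.dat -c 'writer_program /mnt/gluster/shared.dat'

# node2: must also request the lock; -n fails immediately if node1 still holds it
flock -x -n /mnt/gluster/shared.dat -c 'reader_program /mnt/gluster/shared.dat' || echo "shared.dat is locked by the other node"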

In order to use mandatory locking, you need to pass the "mand" mount option to the FUSE client (mount -o mand,<my other mount options> ...) and chmod g+s,g-x /<FUSE PATH>/<Target file>.
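Putting that together, a rough sketch of the whole procedure based on the description above (server, volume and file names are placeholders):

# mount the volume on every client with mandatory locking enabled
mount -t glusterfs -o mand server1:/myvol /mnt/gluster
# mark the target file for mandatory locking (setgid set, group execute cleared)
chmod g+s,g-x /mnt/gluster/shared.dat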


Best Regards,
Strahil Nikolov
On Wednesday, February 1, 2023 at 13:22:59 GMT+2, Maaz Sheikh <maaz.sheikh@xxxxxxxxxxx> wrote:


Team, please let us know if you have any feedback.
------------------------------------------------------------------------
*From:* Maaz Sheikh
*Sent:* Wednesday, January 25, 2023 4:51 PM
*To:* gluster-devel@xxxxxxxxxxx <gluster-devel@xxxxxxxxxxx>; gluster-users@xxxxxxxxxxx <gluster-users@xxxxxxxxxxx>
*Subject:* Regarding Glusterfs file locking
Hi,
Greetings of the day,

*Our configuration is like:*
We have installed both the GlusterFS server and the GlusterFS client on node1 as well as node2, and we have mounted the node1 volume on both nodes.

*Our use case is :*
From GlusterFS node 1, we need to take an exclusive lock, open a file (which is shared between both nodes), and read/write that file.
From GlusterFS node 2, we should then not be able to read/write that file.

*Now the problem we are facing is:*
From node1, we are able to take an exclusive lock, and the program has started writing to that shared file. From node2, however, we are still able to read and write that file, which should not happen because node1 has already acquired the lock on it.

Therefore, we request that you please provide us with a solution as soon as possible.

Thanks,
Maaz Sheikh
Associate Software Engineer
Impetus Technologies India

------------------------------------------------------------------------







--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users



