On 07/30/2015 01:07 PM, Pranith Kumar Karampuri wrote:
Added folks who work on NFS on Gluster. Let's see.
Pranith
On 07/30/2015 09:25 AM, Ryan Clough wrote:
Okay, I think this has to do with Gluster NFS even though I was not
accessing the Gluster volume via NFS.
Directly on the bricks the files look like this:
hgluster01:
-r--r--r--. 2 602 602 832M Apr 3 06:17 scan_89.tar.bak
---------T 2 nfsnobody nfsnobody 0 Jul 13 11:42 scan_90.tar.bak
---------T 2 nfsnobody nfsnobody 0 Jul 13 11:42 scan_91.tar.bak
---------T 2 nfsnobody nfsnobody 0 Jul 13 11:42 scan_92.tar.bak
---------T 2 nfsnobody nfsnobody 0 Jul 13 11:42 scan_94.tar.bak
---------T 2 nfsnobody nfsnobody 0 Jul 13 11:42 scan_95.tar.bak
-r--r--r--. 2 602 602 839M Apr 3 11:39 scan_96.tar.bak
---------T 2 nfsnobody nfsnobody 0 Jul 13 11:42 scan_98.tar.bak
---------T 2 nfsnobody nfsnobody 0 Jul 13 11:42 scan_99.tar.bak
hgluster02:
---------T 2 nfsnobody nfsnobody 0 Jul 13 11:42 scan_89.tar.bak
-r--r--r--. 2 602 602 869939200 Apr 3 07:36 scan_90.tar.bak
-r--r--r--. 2 602 602 868331520 Apr 3 09:36 scan_91.tar.bak
-r--r--r--. 2 602 602 870092800 Apr 3 09:37 scan_92.tar.bak
-r--r--r--. 2 602 602 875448320 Apr 3 09:39 scan_93.tar.bak
-r--r--r--. 2 602 602 870656000 Apr 3 09:40 scan_94.tar.bak
-r--r--r--. 2 602 602 869396480 Apr 3 11:38 scan_95.tar.bak
-r--r--r--. 2 602 602 881858560 Apr 3 11:40 scan_97.tar.bak
-r--r--r--. 2 602 602 868188160 Apr 3 11:41 scan_98.tar.bak
-r--r--r--. 2 602 602 865382400 Apr 3 13:32 scan_99.tar.bak
So I turned off NFS and from a client tried to move the files to see
if that would get rid of these weird nfsnobody files. Didn't work.
After moving the files to a new directory, the new directory still had
all of the nfsnobody files on the bricks.
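For what it's worth, zero-byte entries with mode ---------T on a brick are
normally DHT link files rather than real data; they carry a
trusted.glusterfs.dht.linkto xattr naming the subvolume that holds the
actual file. A quick way to check, run directly on a brick (the path
below is only an illustration, not your real path):

[root@hgluster01 ~]# getfattr -n trusted.glusterfs.dht.linkto -e text \
      /gluster_data/rclough/psdv-2014-archives-2/scan_90.tar.bak

If the xattr is present, the 0-byte file is just a DHT pointer and the
actual data lives on the other brick.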
This is not an issue with Gluster NFS, but the expected behaviour when
the option 'server.root-squash' is on. To verify that, disable Gluster
NFS, enable this option, and try to create a file as root via the Gluster
native mount. The uid/gid of the created file will be the values of the
options server.anonuid/server.anongid, which default to 65534 (the
nfsnobody user, as listed in '/etc/passwd').
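A minimal sketch of that check, assuming the volume name export_volume
from this thread and a fuse mount at /mnt/export (the mount point is an
assumption):

[root@hgluster02 ~]# gluster volume set export_volume nfs.disable on
[root@hgluster02 ~]# gluster volume set export_volume server.root-squash on

# then, as root on a client with the fuse mount (path is illustrative):
[root@client ~]# touch /mnt/export/root-squash-test
[root@client ~]# stat -c '%u:%g %n' /mnt/export/root-squash-test
65534:65534 /mnt/export/root-squash-test

The 65534:65534 ownership is what you would expect to see when root is
being squashed.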
Could you please confirm if the option 'server.root-squash' is on for
your volume? If yes, are there any values set for 'server.anonuid' and
'server.anongid'?
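If your gluster CLI supports 'volume get' (3.7 should), you can query the
effective values directly; otherwise 'gluster volume info' only lists
options that were explicitly reconfigured:

[root@hgluster02 ~]# gluster volume get export_volume server.root-squash
[root@hgluster02 ~]# gluster volume get export_volume server.anonuid
[root@hgluster02 ~]# gluster volume get export_volume server.anongid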
Thanks,
Soumya
Next, I used rsync from the same client to copy all of the files to a
new directory and, lo and behold, the nfsnobody files were gone. I
tested a Bareos backup job and the data was read without issue from
both nodes. There were no empty files.
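For reference, a copy of that sort would look roughly like the following;
the paths and flags here are illustrative, not the exact command used:

[root@client ~]# rsync -a /mnt/export/rclough/psdv-2014-archives-2/ \
      /mnt/export/rclough/psdv-2014-archives-2.copy/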
I guess I will blame Gluster NFS for this one? If you need any more
information from me, I would be happy to oblige.
___________________________________________
¯\_(ツ)_/¯
Ryan Clough
Information Systems
Decision Sciences International Corporation
<http://www.decisionsciencescorp.com/>
On Wed, Jul 29, 2015 at 5:53 AM, Pranith Kumar Karampuri
<pkarampu@xxxxxxxxxx> wrote:
hi Ryan,
What do you see in the logs of the glusterfs mount and the bricks?
Do you think it is possible for you to attach those logs to this
thread so that we can see what could be going on?
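The usual places, in case it helps (exact file names depend on the mount
point and brick path, so these are only examples):

# on the client, the fuse mount log is named after the mount point,
# e.g. for a mount at /mnt/export:
[root@client ~]# less /var/log/glusterfs/mnt-export.log
# on each server, brick logs live under /var/log/glusterfs/bricks/,
# e.g. for the /gluster_data brick:
[root@hgluster01 ~]# less /var/log/glusterfs/bricks/gluster_data.log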
Pranith
On 07/28/2015 02:32 AM, Ryan Clough wrote:
Hello,
I have cross-posted this question in the bareos-users mailing list.
Wondering if anyone has tried this because I am unable to back up
data that is mounted via Gluster Fuse or Gluster NFS. Basically,
I have the Gluster volume mounted on the Bareos Director, which
also has the tape changer attached.
Here is some information about versions:
Bareos version 14.2.2
Gluster version 3.7.2
Scientific Linux version 6.6
Our Gluster volume consists of two nodes in a distribute-only setup. Here
is the configuration of our volume:
[root@hgluster02 ~]# gluster volume info
Volume Name: export_volume
Type: Distribute
Volume ID: c74cc970-31e2-4924-a244-4c70d958dadb
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: hgluster01:/gluster_data
Brick2: hgluster02:/gluster_data
Options Reconfigured:
performance.io-thread-count: 24
server.event-threads: 20
client.event-threads: 4
performance.readdir-ahead: on
features.inode-quota: on
features.quota: on
nfs.disable: off
auth.allow: 192.168.10.*,10.0.10.*,10.8.0.*,10.2.0.*,10.0.60.*
server.allow-insecure: on
server.root-squash: on
performance.read-ahead: on
features.quota-deem-statfs: on
diagnostics.brick-log-level: WARNING
When I try to back up a directory from the Gluster Fuse or Gluster NFS
mount and monitor the network communication, I only see data
being pulled from the hgluster01 brick. When the job finishes,
Bareos thinks that it completed without error but included in the
messages for the job are lots and lots of permission denied
errors like this:
15-Jul 02:03 ripper.red.dsic.com-fd JobId 613: Cannot open
"/export/rclough/psdv-2014-archives-2/scan_111.tar.bak":
ERR=Permission denied.
15-Jul 02:03 ripper.red.dsic.com-fd JobId 613: Cannot open
"/export/rclough/psdv-2014-archives-2/run_219.tar.bak":
ERR=Permission denied.
15-Jul 02:03 ripper.red.dsic.com-fd JobId 613: Cannot open
"/export/rclough/psdv-2014-archives-2/scan_112.tar.bak":
ERR=Permission denied.
15-Jul 02:03 ripper.red.dsic.com-fd JobId 613: Cannot open
"/export/rclough/psdv-2014-archives-2/run_220.tar.bak":
ERR=Permission denied.
15-Jul 02:03 ripper.red.dsic.com-fd JobId 613: Cannot open
"/export/rclough/psdv-2014-archives-2/scan_114.tar.bak":
ERR=Permission denied.
At first I thought this might be a root-squash problem but, if I
try to read/copy a file using the root user from the Bareos
server that is trying to do the backup, I can read files just fine.
When the job finishes, it reports that it finished "OK -- with
warnings" but, again, the log for the job is filled with
"ERR=Permission denied" messages. In my opinion, this job did not
finish OK and should be marked Failed. Some of the files from the
hgluster02 brick are backed up, but all of the ones with
permission errors are not. When I restore the job, all of the
files with permission errors are empty.
Has anyone successfully used Bareos to backup data from Gluster
mounts? This is an important use case for us because this is the
largest single volume that we have to prepare large amounts of
data to be archived.
Thank you for your time,
___________________________________________
¯\_(ツ)_/¯
Ryan Clough
Information Systems
Decision Sciences International Corporation
<http://www.decisionsciencescorp.com/>
This email and its contents are confidential. If you are not the
intended recipient, please do not disclose or use the information
within this email or its attachments. If you have received this email
in error, please report the error to the sender by return email and
delete this communication from your records.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users