Re: [Gluster-user] Sybase backup server failed to write to Gluster NFS

Hi Soumya,

I was able to mount the same volume on another NFS client and do writes.

I got the following nfs.log entries during the write:

[2015-01-22 17:39:03.528405] I [afr-self-heal-common.c:2868:afr_log_self_heal_completion_status] 0-sas02-replicate-1:  metadata self heal  is successfully completed,   metadata self heal from source sas02-client-2 to sas02-client-3,  metadata - Pending matrix:  [ [ 0 0 ] [ 0 0 ] ], on /RepDBSata02
[2015-01-22 17:39:03.529407] I [afr-self-heal-common.c:2868:afr_log_self_heal_completion_status] 0-sas02-replicate-2:  metadata self heal  is successfully completed,   metadata self heal from source sas02-client-4 to sas02-client-5,  metadata - Pending matrix:  [ [ 0 0 ] [ 0 0 ] ], on /RepDBSata02
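
The mount and test write were roughly along these lines (the server name and mount point below are placeholders, not the exact ones used):

    # Gluster NFS only serves NFSv3, so force vers=3
    mount -t nfs -o vers=3 glusterserver:/sas02 /mnt/sas02
    # simple test write
    dd if=/dev/zero of=/mnt/sas02/writetest bs=1M count=10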


Thanks
Peter
________________________________________
From: Soumya Koduri [skoduri@xxxxxxxxxx]
Sent: Wednesday, January 21, 2015 9:05 PM
To: Peter Auyeung; gluster-devel@xxxxxxxxxxx; gluster-users@xxxxxxxxxxx
Subject: Re:  [Gluster-user] Sybase backup server failed to write to Gluster NFS

Hi Peter,

Could you please try manually mounting those volumes using another NFS
client and check whether you are able to perform write operations? Also,
please collect the gluster NFS log while doing so; a rough sketch is
below.
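
For example (assuming the default log location; <server> and <volname> are placeholders):

    # on the client: mount over NFSv3 and attempt a write
    mount -t nfs -o vers=3 <server>:/<volname> /mnt/test
    dd if=/dev/zero of=/mnt/test/testfile bs=1M count=10
    # on the gluster server running the NFS process: watch the NFS log
    tail -f /var/log/glusterfs/nfs.log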

Thanks,
Soumya

On 01/22/2015 08:18 AM, Peter Auyeung wrote:
> Hi,
>
> We have had 5 Sybase servers doing dumps/exports to Gluster NFS for a
> couple of months, and yesterday they started giving us the errors below
> about not being able to write files.
>
> The Gluster NFS export is not full, and we can still move and write
> files as the sybase unix user from the Sybase servers.
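>
> The check was roughly (the path matches our mount; the test filename is
> just an example):
>
>     df -h /dbbackup01
>     su - sybase -c 'touch /dbbackup01/db/full/write_probe'
>     su - sybase -c 'rm /dbbackup01/db/full/write_probe'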
>
> There are no errors in the Gluster NFS, brick, or etc-glusterfs logs,
> and no NFS client errors on the Sybase servers either.
>
> The NFS export was a replica 2 volume (3x2).
>
> I created another NFS export from the same Gluster cluster, but as a
> distribute-only volume, and it still gives the same error.
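>
> For reference, the distribute-only volume was created along these lines
> (the volume name, hostnames, and brick paths are placeholders):
>
>     # no replica count given, so this creates a plain distribute volume
>     gluster volume create dbbackup02 server1:/export/brick1 server2:/export/brick2
>     gluster volume start dbbackup02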
>
> Any clue?
>
> Thanks
> Peter
>
> Jan 20 20:04:17 2015: Backup Server: 6.53.1.1: OPERATOR: Volume on
> device '/dbbackup01/db/full/pr_rssd_id_repsrv_rssd.F01-20-20-04.e'
> cannot be opened for write access. Mount another volume.
> Jan 20 20:04:17 2015: Backup Server: 6.78.1.1: EXECUTE sp_volchanged
>          @session_id = 87,
>          @devname =
> '/dbbackup01/db/full/pr_rssd_id_repsrv_rssd.F01-20-20-04.e',
>          @action = { 'PROCEED' | 'RETRY' | 'ABORT' }
> Jan 20 20:04:26 2015: Backup Server: 6.53.1.1: OPERATOR: Volume on
> device '/dbbackup01/db/full/pr_rssd_id_repsrv_rssd.F01-20-20-04.a'
> cannot be opened for write access. Mount another volume.
> Jan 20 20:04:26 2015: Backup Server: 6.78.1.1: EXECUTE sp_volchanged
>          @session_id = 87,
>          @devname =
> '/dbbackup01/db/full/pr_rssd_id_repsrv_rssd.F01-20-20-04.a',
>          @action = { 'PROCEED' | 'RETRY' | 'ABORT' }
> Jan 20 20:05:41 2015: Backup Server: 6.53.1.1: OPERATOR: Volume on
> device '/dbbackup01/db/full/pr_rssd_id_repsrv_rssd.F01-20-20-04.d'
> cannot be opened for write access. Mount another volume.
> Jan 20 20:05:41 2015: Backup Server: 6.78.1.1: EXECUTE sp_volchanged
>          @session_id = 87,
>          @devname =
> '/dbbackup01/db/full/pr_rssd_id_repsrv_rssd.F01-20-20-04.d',
>          @action = { 'PROCEED' | 'RETRY' | 'ABORT' }
>
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel



