Re: 'error=No space left on device' but, there is plenty of space all nodes

Hi Strahil and Gluster users,

 

Yes, I had checked, but checked again: only 1% inode usage, 99% free. Same on every node.

 

Example:

[root@nybaknode1 ]# df -i /lvbackups/brick

Filesystem                          Inodes IUsed      IFree IUse% Mounted on

/dev/mapper/vgbackups-lvbackups 3108921344 93602 3108827742    1% /lvbackups

[root@nybaknode1 ]#
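(A small sketch for anyone hitting this thread later, not from the original mail: the check above can be scripted so each node warns before inodes run out. The brick path and the 90% threshold below are placeholder assumptions.)

```shell
#!/bin/sh
# Warn when inode usage on a filesystem crosses a threshold.
# Example usage: ./check_inodes.sh /lvbackups/brick 90
# (path and threshold are illustrative placeholders)
MOUNT="${1:-/}"
THRESHOLD="${2:-90}"

# -P keeps each filesystem on a single output line, so the awk
# column numbers hold even for long device names
USAGE=$(df -iP "$MOUNT" | awk 'NR==2 { gsub(/%/, ""); print $5 }')

case "$USAGE" in
    ''|*[!0-9]*) echo "could not read inode usage for $MOUNT"; exit 2 ;;
esac

if [ "$USAGE" -ge "$THRESHOLD" ]; then
    echo "WARNING: inode usage on $MOUNT is ${USAGE}%"
else
    echo "OK: inode usage on $MOUNT is ${USAGE}%"
fi
```

Run from cron on each node, this would have flagged inode exhaustion before the cluster started returning ENOSPC, had that been the cause.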

 

I neglected to clarify in the original post that this issue is actually being seen through nfs-ganesha remote client mounts to gluster. It shows up after ~12-24 hours of backups uploading over NFS; this happened the last couple of weekends.

 

If I reboot all the gluster nodes, the backup jobs, NFS mounts, etc. are able to recover and resume the backup cycle, so it seems to be a no-disk-space error state that clears on reboot. This is the workaround we are having to use for now.

 

We are currently using nfs-ganesha-3.5-3.el8.x86_64 on all gluster nodes.

 

For some servers we upload backups over FTP to vsftpd+gluster; for other servers we use NFSv4 clients mounting remotely to upload backups over NFS to nfs-ganesha+gluster.

 

I'm planning to do another backup cycle of tests, re-review the results, and compare our FTP vs NFS clients further.

 

I will probably file a GitHub issue report, as someone else advised in another reply.

 

Thanks,

 

Brandon

 

 

From: Strahil Nikolov <hunter86_bg@xxxxxxxxx>
Sent: Thursday, May 4, 2023 9:54 AM
To: brandon@xxxxxxxxxxxxx; gluster-users@xxxxxxxxxxx
Subject: Re: 'error=No space left on device' but, there is plenty of space all nodes

 

Hi,

Have you checked inode usage (df -i /lvbackups/brick)?



Best Regards,

Strahil Nikolov

 

On Tuesday, May 2, 2023, 3:05 AM, brandon@xxxxxxxxxxxxx wrote:

Hi Gluster users,

 

We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise.

We have been using a 12-node GlusterFS distributed vsftpd backup cluster for years (not new), and two weeks ago upgraded from v9 to v10.4. I do not know if the upgrade is related to this new issue.

 

We are seeing the new 'error=No space left on device' error below in the logs on multiple gluster v10.4 nodes; at the moment it appears in the logs of about half (5 out of 12) of the nodes. The issue goes away if we reboot all the glusterfs nodes, but backups take a little over 2 days to complete each weekend, and the issue returns after about 1 day of backups running, before the backup cycle is complete. It has happened on each of the last 2 weekends we have run backups to these nodes.

 

#example log msg from /var/log/glusterfs/home-volbackups.log
[2023-05-01 21:43:15.450502 +0000] W [MSGID: 114031] [client-rpc-fops_v2.c:670:client4_0_writev_cbk] 0-volbackups-client-18: remote operation failed. [{errno=28}, {error=No space left on device}]
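(An aside for later readers, not from the original mail: errno 28 in that log line is ENOSPC, which the kernel returns for exhausted inodes as well as exhausted blocks, which is why `df -i` gets checked alongside `df -h`. The mapping is easy to confirm:)

```shell
# ENOSPC is errno 28 on Linux; print the number and its message
python3 -c 'import errno, os; print(errno.ENOSPC, os.strerror(errno.ENOSPC))'
# prints: 28 No space left on device
```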

 

Each glusterfs node has a single brick, mounts the single distributed volume locally as a glusterfs client, and receives backup files to the volume each weekend.

 

We distribute the ftp upload load between the servers through a combination of /etc/hosts entries and AWS weighted DNS.

 

We have 91 TB available on the volume, though, and each of the 12 nodes has 4-11 TB free, so we are nowhere near out of space on any node.

 

We have already tried changing the setting from 'cluster.min-free-disk: 1%' to 'cluster.min-free-disk: 1GB' and rebooted all the gluster nodes to refresh them, and it happened again. That was mentioned in this doc
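(For anyone following along, the setting change described above maps to the standard gluster CLI, run on any node; 'volbackups' is the volume name from this thread:)

```shell
# Raise the reserve from a percentage to an absolute size, then verify
gluster volume set volbackups cluster.min-free-disk 1GB
gluster volume get volbackups cluster.min-free-disk
```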

 

Does anyone know what we might check next?

 

glusterfs-server-10.4-1.el8s.x86_64

glusterfs-fuse-10.4-1.el8s.x86_64

 

Here is the info (hostnames changed) below.

 

[root@nybaknode1 ~]# gluster volume status volbackups detail
Status of volume: volbackups
------------------------------------------------------------------------------
Brick                : Brick nybaknode9.example.net:/lvbackups/brick
TCP Port             : 60039
RDMA Port            : 0
Online               : Y
Pid                  : 1664
File System          : xfs
Device               : /dev/mapper/vgbackups-lvbackups
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 6.1TB
Total Disk Space     : 29.0TB
Inode Count          : 3108974976
Free Inodes          : 3108881513
------------------------------------------------------------------------------
Brick                : Brick nybaknode11.example.net:/lvbackups/brick
TCP Port             : 52682
RDMA Port            : 0
Online               : Y
Pid                  : 2076
File System          : xfs
Device               : /dev/mapper/vgbackups-lvbackups
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 10.1TB
Total Disk Space     : 43.5TB
Inode Count          : 4672138432
Free Inodes          : 4672039743
------------------------------------------------------------------------------
Brick                : Brick nybaknode2.example.net:/lvbackups/brick
TCP Port             : 56722
RDMA Port            : 0
Online               : Y
Pid                  : 1761
File System          : xfs
Device               : /dev/mapper/vgbackups-lvbackups
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 6.6TB
Total Disk Space     : 29.0TB
Inode Count          : 3108921344
Free Inodes          : 3108827241
------------------------------------------------------------------------------
Brick                : Brick nybaknode3.example.net:/lvbackups/brick
TCP Port             : 53098
RDMA Port            : 0
Online               : Y
Pid                  : 1601
File System          : xfs
Device               : /dev/mapper/vgbackups-lvbackups
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 6.4TB
Total Disk Space     : 29.0TB
Inode Count          : 3108921344
Free Inodes          : 3108827312
------------------------------------------------------------------------------
Brick                : Brick nybaknode4.example.net:/lvbackups/brick
TCP Port             : 51476
RDMA Port            : 0
Online               : Y
Pid                  : 1633
File System          : xfs
Device               : /dev/mapper/vgbackups-lvbackups
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 6.9TB
Total Disk Space     : 29.0TB
Inode Count          : 3108921344
Free Inodes          : 3108826837
------------------------------------------------------------------------------
Brick                : Brick nybaknode12.example.net:/lvbackups/brick
TCP Port             : 50224
RDMA Port            : 0
Online               : Y
Pid                  : 1966
File System          : xfs
Device               : /dev/mapper/vgbackups-lvbackups
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 9.9TB
Total Disk Space     : 43.5TB
Inode Count          : 4671718976
Free Inodes          : 4671620385
------------------------------------------------------------------------------
Brick                : Brick nybaknode5.example.net:/lvbackups/brick
TCP Port             : 55270
RDMA Port            : 0
Online               : Y
Pid                  : 1666
File System          : xfs
Device               : /dev/mapper/vgbackups-lvbackups
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 7.4TB
Total Disk Space     : 29.0TB
Inode Count          : 3108921344
Free Inodes          : 3108827575
------------------------------------------------------------------------------
Brick                : Brick nybaknode6.example.net:/lvbackups/brick
TCP Port             : 53106
RDMA Port            : 0
Online               : Y
Pid                  : 1688
File System          : xfs
Device               : /dev/mapper/vgbackups-lvbackups
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Inode Size           : 512
Disk Space Free      : 7.6TB
Total Disk Space     : 29.0TB
Inode Count          : 3108921344
Free Inodes          : 3108827668
------------------------------------------------------------------------------
Brick                : Brick nybaknode7.example.net:/lvbackups/brick
TCP Port             : 56734
RDMA Port            : 0
Online               : Y
Pid                  : 1655
File System          : xfs
Device               : /dev/mapper/vgbackups-lvbackups
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 4.0TB
Total Disk Space     : 14.4TB
Inode Count          : 1546333376
Free Inodes          : 1546245572
------------------------------------------------------------------------------
Brick                : Brick nybaknode8.example.net:/lvbackups/brick
TCP Port             : 60208
RDMA Port            : 0
Online               : Y
Pid                  : 1754
File System          : xfs
Device               : /dev/mapper/vgbackups-lvbackups
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota
Inode Size           : 512
Disk Space Free      : 7.0TB
Total Disk Space     : 29.0TB
Inode Count          : 3108921344
Free Inodes          : 3108827378
------------------------------------------------------------------------------
Brick                : Brick nybaknode10.example.net:/lvbackups/brick
TCP Port             : 53237
RDMA Port            : 0
Online               : Y
Pid                  : 1757
File System          : xfs
Device               : /dev/mapper/vgbackups-lvbackups
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 10.5TB
Total Disk Space     : 29.0TB
Inode Count          : 3108921344
Free Inodes          : 3108828289
------------------------------------------------------------------------------
Brick                : Brick nybaknode1.example.net:/lvbackups/brick
TCP Port             : 54446
RDMA Port            : 0
Online               : Y
Pid                  : 1685
File System          : xfs
Device               : /dev/mapper/vgbackups-lvbackups
Mount Options        : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=128,swidth=128,noquota
Inode Size           : 512
Disk Space Free      : 11.2TB
Total Disk Space     : 29.0TB
Inode Count          : 3108921344
Free Inodes          : 3108828701

[root@nybaknode1 ~]#

 

[root@nybaknode1 ~]# gluster volume status volbackups
Status of volume: volbackups
Gluster process                                 TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick nybaknode9.example.net:/lvbackups/brick   60039     0          Y       1664
Brick nybaknode11.example.net:/lvbackups/brick  52682     0          Y       2076
Brick nybaknode2.example.net:/lvbackups/brick   56722     0          Y       1761
Brick nybaknode3.example.net:/lvbackups/brick   53098     0          Y       1601
Brick nybaknode4.example.net:/lvbackups/brick   51476     0          Y       1633
Brick nybaknode12.example.net:/lvbackups/brick  50224     0          Y       1966
Brick nybaknode5.example.net:/lvbackups/brick   55270     0          Y       1666
Brick nybaknode6.example.net:/lvbackups/brick   53106     0          Y       1688
Brick nybaknode7.example.net:/lvbackups/brick   56734     0          Y       1655
Brick nybaknode8.example.net:/lvbackups/brick   60208     0          Y       1754
Brick nybaknode10.example.net:/lvbackups/brick  53237     0          Y       1757
Brick nybaknode1.example.net:/lvbackups/brick   54446     0          Y       1685

Task Status of Volume volbackups
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : df687907-fee4-4a46-9d23-2cfb38cb17cd
Status               : completed

[root@nybaknode1 ~]#

 

[root@nybaknode1 ~]# gluster volume info volbackups

Volume Name: volbackups
Type: Distribute
Volume ID: cd40794d-ab74-4706-a0bc-3e95bb8c63a2
Status: Started
Snapshot Count: 0
Number of Bricks: 12
Transport-type: tcp
Bricks:
Brick1: nybaknode9.example.net:/lvbackups/brick
Brick2: nybaknode11.example.net:/lvbackups/brick
Brick3: nybaknode2.example.net:/lvbackups/brick
Brick4: nybaknode3.example.net:/lvbackups/brick
Brick5: nybaknode4.example.net:/lvbackups/brick
Brick6: nybaknode12.example.net:/lvbackups/brick
Brick7: nybaknode5.example.net:/lvbackups/brick
Brick8: nybaknode6.example.net:/lvbackups/brick
Brick9: nybaknode7.example.net:/lvbackups/brick
Brick10: nybaknode8.example.net:/lvbackups/brick
Brick11: nybaknode10.example.net:/lvbackups/brick
Brick12: nybaknode1.example.net:/lvbackups/brick
Options Reconfigured:
cluster.min-free-disk: 1GB
nfs.disable: on
transport.address-family: inet
performance.cache-max-file-size: 2MB
diagnostics.brick-log-level: WARNING
diagnostics.brick-sys-log-level: WARNING
client.event-threads: 16
performance.client-io-threads: on
performance.io-thread-count: 32
server.event-threads: 16
performance.cache-size: 256MB
[root@nybaknode1 ~]#

 

 


________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
