On 02/06/2014 02:59 PM, Justin Dossey wrote:
> An hour of googling didn't turn up the answer, so I'll ask here: do you
> know about when the linux kernel changed to enable inode64 by default? I
> haven't run into the issue Pat had, but I don't want to!
>

It looks like the following commit:

08bf5404 xfs: make inode64 as the default allocation mode

... which is first included in Linux 3.7.

> I wish inode64 use were reported by xfs_info or something.
>

I think xfs_info is more for the geometry of the filesystem. inode64 is a
mount (runtime) option. Have you checked the mount options listed for your
active mounts (i.e., 'mount')?

Brian

>
> On Thu, Feb 6, 2014 at 10:57 AM, Brian Foster <bfoster@xxxxxxxxxx> wrote:
>
>> On 02/06/2014 11:25 AM, Pat Haley wrote:
>>>
>>> Hi Brian,
>>>
>>> gluster-0-1 did not recognize the delaylog option,
>>> but when I mounted the disk with nobarrier,inode64
>>> I was able to write to the disk both directly
>>> and from a client through gluster.
>>>
>>> Assuming inode64 was the key, was the problem
>>> that XFS could not address the inodes without
>>> 64 bit representation? Just curious.
>>>
>>> Problem solved! Thanks!
>>>
>>
>> Oh, ok. That's good to hear. inode64 tends to slip my mind because it's
>> fairly standard at this point. It's enabled by default on more recent
>> kernels.
>>
>> Effectively, the problem is as you describe. Without inode64 support,
>> inodes are restricted to the first 1TB of fs space. With inode64
>> enabled, the fs can allocate new inode chunks throughout the disk, so
>> long as there is enough contiguous space. See the following:
>>
>> http://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_inode64_mount_option_for.3F
>>
>> FWIW, I wouldn't continue using the nobarrier option unless you know
>> what you're doing and you can safely run with barriers disabled (see the
>> same FAQ linked above for information on barrier support). It is
>> completely unrelated to this particular problem.
>>
>> Brian
>>
>>> Pat
>>>
>>>
>>>> On 02/05/2014 03:39 PM, Pat Haley wrote:
>>>>> Hi Brian,
>>>>>
>>>>> I tried both just using touch to create
>>>>> an empty file and copying a small (<1kb)
>>>>> file. Neither worked.
>>>>>
>>>>> Note: currently the disk served by gluster-0-1
>>>>> is mounted as
>>>>>
>>>>> /dev/sdb1    /mseas-data-0-1    xfs    defaults    1 0
>>>>>
>>>>> I have received some advice to change the mount
>>>>> to nobarrier,inode64,delaylog
>>>>> Would this be compatible with gluster?
>>>>>
>>>>
>>>> That suggests XFS cannot allocate more inodes. inode64 is worth a try if
>>>> that is not currently enabled. I don't think it should make a difference
>>>> for gluster. The other options are unrelated and should have no effect.
>>>>
>>>> Brian
>>>>
>>>>> Pat
>>>>>
>>>>>
>>>>>> On 02/04/2014 02:14 PM, Jeff Darcy wrote:
>>>>>>>> I tried to "go behind" gluster and directly
>>>>>>>> write a file to the nfs filesystem on gluster-0-1.
>>>>>>>>
>>>>>>>> If I try to write to /mseas-data-0-1 (the file
>>>>>>>> space served by gluster-0-1) directly I still
>>>>>>>> get the "No space left on device" error.
>>>>>>>> (df -h still shows 784G on that disk)
>>>>>>>>
>>>>>> Are you writing to an existing file or attempting to create a new one?
>>>>>> Can you simply create a new, empty file on your backend (i.e., touch
>>>>>> mynewfile)?
>>>>>>
>>>>>> Brian
>>>>>>
>>>>>>>> If I try to write to the system disk
>>>>>>>> (the only other area) there is no problem.
>>>>>>>>
>>>>>>>> I don't have any portion of the disk served by
>>>>>>>> gluster-0-1 that is not under gluster, so I
>>>>>>>> can't try to write to a non-gluster portion of
>>>>>>>> the disk.
>>>>>>>>
>>>>>>>> Does this suggest anything?
>>>>>>> Quota limit?
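
For anyone who lands on this thread with the same symptom, here is a minimal
check-and-enable sketch, using the device and mount point quoted above
(/dev/sdb1 on /mseas-data-0-1); adjust the names for your own brick. It
assumes a pre-3.7 kernel where inode64 is not yet the default, and that the
brick can be taken offline briefly, since older kernels may not apply inode64
on a live remount. The name "testfile" is just a scratch file for the
reproduction step.

    # Symptom: free space is still reported, but new inode creation fails
    df -h /mseas-data-0-1
    touch /mseas-data-0-1/testfile    # fails with "No space left on device"?

    # Check whether inode64 is among the active mount options
    # (xfs_info reports geometry only, not runtime mount options)
    mount | grep mseas-data-0-1

    # Enable inode64 with a clean unmount/mount; stop whatever is using
    # the brick (e.g. the gluster brick process) before unmounting
    umount /mseas-data-0-1
    mount -o inode64 /dev/sdb1 /mseas-data-0-1

    # Make it persistent across reboots in /etc/fstab
    /dev/sdb1   /mseas-data-0-1   xfs   defaults,inode64   1 0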