Re: Issue with "no space left on device"

On 17-08-17 23:28, Eric Sandeen wrote:
On 8/17/17 4:21 PM, Eric Sandeen wrote:
On 8/17/17 3:11 PM, Sander van Schie wrote:

On 17-08-17 19:45, Eric Sandeen wrote:

...

recent xfsprogs v4.12 has a new option to freesp, to specify alignment filters,
i.e.

      xfs_db> freesp -A 32
           will show only 32-block aligned free extents.

You may not have that recent of xfsprogs, but you could check out the git
tree from git://git.kernel.org/pub/scm/fs/xfs/xfsprogs-dev.git, build
it, and run xfs_db from within the tree, i.e.

# db/xfs_db -c "freesp -A 32 -s" /dev/vdc1

Can you provide that output?

-Eric


Thank you for the explanation!

The output of the command is as follows:

# db/xfs_db -c "freesp -A 32 -s" /dev/vdc1
    from      to extents  blocks    pct
       1       1       1       1   0,00
       2       3       7      21   0,01
       4       7     235     970   0,24
       8      15     130    1313   0,33
      16      31   14214  397375  99,42

So, I think that's the problem: There are no 32-block aligned
free regions of 32 blocks length or greater.

(now that I think about it, the -A filter filters on free extents
/starting/ on that alignment; I don't know if the inode allocator
can make use of, say, a 64 block free extent which /overlaps/ an
aligned 32-block range... hm)
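For what it's worth, the overlap question can be illustrated with a small sketch (a hypothetical helper, not actual xfsprogs code): given a free extent and an alignment, check whether the extent fully contains an aligned run of that many blocks, regardless of whether the extent itself starts aligned.

```python
def contains_aligned_run(start, length, align=32):
    """Return True if the free extent [start, start + length) fully
    contains at least one run of `align` blocks that begins on an
    `align`-block boundary (what an aligned inode chunk would need)."""
    first_aligned = ((start + align - 1) // align) * align  # round up
    return first_aligned + align <= start + length

# A 64-block extent starting at block 16 does contain an aligned
# 32-block range (blocks 32..63), even though the extent itself
# does not start aligned:
print(contains_aligned_run(16, 64))  # True

# A 31-block extent can never hold one:
print(contains_aligned_run(0, 31))  # False
```

So if the inode allocator can use overlapping extents, some of the 16-31 block extents in the table above might still be usable; if it requires the extent to start aligned, none are.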

Out of curiosity, what was the reason for 2k inodes?

The filesystem was created by a default Ceph deployment, so there was no particular reason. An inode size of 2k appears to be the Ceph(-deploy) default, intended, as I understand it, to avoid performance issues caused by metadata that would otherwise not fit in a single extent.

Is this currently primarily an issue due to the fairly small partition size combined with the big inode size? Would it be less of an issue with a partition of, say, 1 TB?

I'll do some more research, as it's not very clear to me yet (due to my lack of experience with XFS, or filesystems in general). Your information was very insightful though, so thank you for that!


Also for what it's worth - the sparse inodes feature, which is
default on newer filesystems, alleviates this problem.  When
mounted, what does xfs_info /mount/point say? Does it contain
output for "spinodes"?

Currently it's set to 0: spinodes=0


If you need 2k inodes, you probably want to get userspace+kernel
that can support sparse inode allocation.
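For reference, sparse inode allocation is a mkfs-time feature, so the filesystem would need to be recreated with something like the following (a sketch only; the device name is from this thread's example, and mkfs destroys existing data):

```shell
# Recreate the filesystem with 2k inodes and sparse inode
# allocation enabled (sparse requires the v5/CRC format).
# WARNING: this wipes /dev/vdc1.
mkfs.xfs -f -m crc=1 -i size=2048,sparse=1 /dev/vdc1
```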
I will look into this.


-Eric



--
To unsubscribe from this list: send the line "unsubscribe linux-xfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


