Re: Insane file system overhead on large volume

On 1/27/12 1:50 AM, Manny wrote:
> Hi there,
> 
> I'm not sure if this is intended behavior, but I was a bit stumped
> when I formatted a 30TB volume (12x3TB minus 2x3TB for parity in RAID
> 6) with XFS and noticed that there were only 22 TB left. I just called
> mkfs.xfs with default parameters - except for swidth and sunit which
> match the RAID setup.
> 
> Is it normal that I lost 8TB just for the file system? That's almost
> 30% of the volume. Should I set the block size higher? Or should I
> increase the number of allocation groups? Would that make a
> difference? What's the preferred method for handling such large
> volumes?
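
For reference, that stripe geometry is usually handed to mkfs.xfs as
su/sw (or sunit/swidth in 512-byte units).  A minimal sketch, assuming
a 64k chunk size and /dev/md0 as the array - both of those are just
placeholders here, adjust to your setup:

# mkfs.xfs -d su=64k,sw=10 /dev/md0

su is the per-disk chunk size and sw the number of data disks (10 for
a 12-disk RAID6), from which mkfs derives sunit/swidth.  Geometry only
affects allocation alignment, though, not the amount of usable space.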

If it's 12x3TB drives, I imagine you're confusing TB with TiB, so
perhaps your 30TB is really only about 27TiB to start with.
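
As a quick sanity check, assuming 10 data disks of 3 TB (decimal) each:

# echo 'scale=2; 10 * 3 * 10^12 / 2^40' | bc
27.28

So roughly 27.3TiB of raw capacity before mkfs even runs.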

Anyway, fs metadata should not eat much space:

# mkfs.xfs -d file,name=fsfile,size=30t
# ls -lh fsfile
-rw-r--r-- 1 root root 30T Jan 27 12:18 fsfile
# mount -o loop fsfile mnt/
# df -h mnt
Filesystem            Size  Used Avail Use% Mounted on
/tmp/fsfile            30T  5.0M   30T   1% /tmp/mnt

So Christoph's question was a good one: where are you getting
your sizes?
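
If you want to compare like with like, something along these lines
would show the raw device size, what df reports in both unit systems,
and the geometry mkfs actually used (/dev/md0 and /mnt are just
placeholders for your device and mountpoint):

# blockdev --getsize64 /dev/md0   # raw device size in bytes
# df -h /mnt                      # binary units (TiB)
# df -H /mnt                      # decimal units (TB)
# xfs_info /mnt                   # block size, agcount, sunit/swidth as created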

-Eric

> Thanks a lot,
> Manny
> 

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

