In fact, the thing I really want to achieve is to find the values and
the algorithm that let me reproduce the percentage given by df (and to
understand deeply what it means). Why do I need it? Because I'm trying
to write a script to do capacity planning and disk-space forecasting,
and currently I don't really know which values I should use for it. (I
could use the percentage given by df, but it lacks the precision to
make useful forecasts.)

2013/9/17 Nicolas Michel <be.nicolas.michel@xxxxxxxxx>:
> OK. Thanks for the journal information. I thought tune2fs -l and
> dumpe2fs were the same. In reality they are almost, but not quite,
> the same. ^^
>
> I hear you about all the internal mechanisms that make the FS work or
> give it features, and I do understand that they take up some space on
> the disk. What I don't understand is why the number given in the
> "Available" column is called "available" if that is not really the
> case and we have to subtract some other thousands or millions of
> bytes for internal mechanisms.
>
> In other words, I don't understand why the "used" percentage given by
> df does not match the values df itself gives in the other columns.
>
> I can live with it, but I would really like to understand why things
> are the way they are. Is there a historical reason? Or maybe a
> technical reason that makes those numbers derived values?
>
> At the least, it would help to have the df algorithm documented
> somewhere: a document that explains the intent and how the values are
> obtained. The same for tune2fs and dumpe2fs (what do the reported
> numbers really mean?)
>
> 2013/9/16 Eric Sandeen <sandeen@xxxxxxxxxx>:
>> On 9/16/13 9:44 AM, Nicolas Michel wrote:
>>> Thanks for your help. I also tried adding some other information,
>>> as you suggested. I can also take into account:
>>> - "Reserved block count: XXXXXXX" from tune2fs, which gives me the
>>>   number of blocks reserved for root
>>> - Reserved GDT blocks: XXX
>>>
>>> But I didn't think about the FS journal. How can I gather
>>> information about it (its size and anything else)?
>>
>> # dumpe2fs /dev/$YOUR_DEVICE | grep Journal
>> dumpe2fs 1.41.12 (17-May-2010)
>> Journal inode:            8
>> Journal backup:           inode blocks
>> Journal features:         journal_incompat_revoke
>> Journal size:             128M
>> Journal length:           32768
>>
>> But you also need to take into account inode tables, inode
>> allocation bitmaps, block allocation bitmaps ...
>>
>> -Eric
>>
>>> 2013/9/16 Eric Sandeen <sandeen@xxxxxxxxxx>:
>>>> On 9/16/13 5:16 AM, Nicolas Michel wrote:
>>>>> Hello guys,
>>>>>
>>>>> I have some difficulty understanding what the numbers behind "df"
>>>>> and tune2fs really are. You'll find the output of tune2fs and df
>>>>> below, on which my math is based.
>>>>>
>>>>> Here is my math:
>>>>>
>>>>> tune2fs on an ext3 FS tells me the FS is 3284992 blocks large. It
>>>>> also tells me that the size of one block is 4096 (bytes, if I'm
>>>>> not wrong?). So my math tells me that the disk is 3284992 * 4096 =
>>>>> 13455327232 bytes, or 13455327232 / 1024 / 1024 / 1024 = 12.53 GB.
>>>>>
>>>>> A df --block-size=1 on the same FS tells me the disk is
>>>>> 13243846656 bytes, which is 211480576 bytes smaller than what
>>>>> tune2fs tells me.
>>>>
>>>> By default, df on extN assumes that metadata used by the filesystem
>>>> was never available for your use and is not part of the filesystem
>>>> space.
>>>>
>>>> Documentation/filesystems/ext3.txt says:
>>>>
>>>>   bsddf        (*)     Make 'df' act like BSD.
>>>>   minixdf              Make 'df' act like Minix.
>>>>
>>>> which is pretty unhelpful, I suppose. ;)
>>>>
>>>> The mount man page is a little more helpful:
>>>>
>>>>   bsddf|minixdf
>>>>          Set the behaviour for the statfs system call. The minixdf
>>>>          behaviour is to return in the f_blocks field the total
>>>>          number of blocks of the filesystem, while the bsddf
>>>>          behaviour (which is the default) is to subtract the
>>>>          overhead blocks used by the ext2 filesystem and not
>>>>          available for file storage.
>>>>
>>>> You're seeing the latter behavior. If you mount with -o minixdf you
>>>> should see what you expect. (Too bad there's no "linuxdf"?) :)
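>>>>
>>>> If the end goal is a script, something along these lines should get
>>>> you the same figures df reports, straight from statfs (a rough,
>>>> untested sketch; it assumes GNU coreutils stat(1) and uses your
>>>> /logs mount point, so adjust as needed):
>>>>
>>>> #!/bin/bash
>>>> # df-like numbers for one filesystem, taken from statfs via stat -f
>>>> mnt=/logs
>>>> read total free avail bsize <<<"$(stat -f -c '%b %f %a %S' "$mnt")"
>>>> used=$((total - free))
>>>> echo "size : $((total * bsize)) bytes"   # df's 1B-blocks (bsddf total)
>>>> echo "used : $((used  * bsize)) bytes"
>>>> echo "avail: $((avail * bsize)) bytes"   # free minus the root reserve
>>>> # as far as I know, GNU df computes Use% as used/(used+avail),
>>>> # rounded up -- not used/total
>>>> echo "use% : $(( (used * 100 + used + avail - 1) / (used + avail) ))%"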
>>>>>
>>>>> In gigabytes, it means:
>>>>> * for df, the disk is 12.33 GB
>>>>> * for tune2fs, the disk is 12.53 GB
>>>>>
>>>>> I thought that maybe df only takes into account the blocks really
>>>>> available to users. So I tried to remove the reserved blocks and
>>>>> the GDT blocks:
>>>>> (3284992 - 164249 - 801) * 4096 = 12779282432
>>>>> or in GB: 12779282432 / 1024 / 1024 / 1024 = 11.90 GB ...
>>>>
>>>> You're on the right track, but you forgot the journal space, all
>>>> the preallocated inode table blocks, etc.
>>>>
>>>> -Eric
>>>>
>>>>> My last thought was that "Reserved block count" in tune2fs covers
>>>>> not only the blocks reserved for root (5% by default on my system)
>>>>> but also all the other blocks reserved for the FS's internal use.
>>>>> So:
>>>>> (3284992 - 164249) * 4096 = 12782563328
>>>>> In GB: 11.90 GB (the difference is not significant to two decimal
>>>>> places).
>>>>>
>>>>> So I'm lost ...
>>>>>
>>>>> Does someone have an explanation? I would really, really be
>>>>> grateful.
>>>>> Nicolas
>>>>>
>>>>> ---------------------------------------
>>>>>
>>>>> Here is the output of df and tune2fs:
>>>>>
>>>>> $ tune2fs -l /dev/mapper/datavg-datalogslv
>>>>> tune2fs 1.41.9 (22-Aug-2009)
>>>>> Filesystem volume name:   <none>
>>>>> Last mounted on:          <not available>
>>>>> Filesystem UUID:          4e5bea3e-3e61-4fc8-9676-e5177522911c
>>>>> Filesystem magic number:  0xEF53
>>>>> Filesystem revision #:    1 (dynamic)
>>>>> Filesystem features:      has_journal ext_attr resize_inode
>>>>>                           dir_index filetype needs_recovery
>>>>>                           sparse_super large_file
>>>>> Filesystem flags:         unsigned_directory_hash
>>>>> Default mount options:    (none)
>>>>> Filesystem state:         clean
>>>>> Errors behavior:          Continue
>>>>> Filesystem OS type:       Linux
>>>>> Inode count:              822544
>>>>> Block count:              3284992
>>>>> Reserved block count:     164249
>>>>> Free blocks:              3109325
>>>>> Free inodes:              822348
>>>>> First block:              0
>>>>> Block size:               4096
>>>>> Fragment size:            4096
>>>>> Reserved GDT blocks:      801
>>>>> Blocks per group:         32768
>>>>> Fragments per group:      32768
>>>>> Inodes per group:         8144
>>>>> Inode blocks per group:   509
>>>>> Filesystem created:       Wed Aug 28 08:30:10 2013
>>>>> Last mount time:          Wed Sep 11 17:16:56 2013
>>>>> Last write time:          Thu Sep 12 09:38:02 2013
>>>>> Mount count:              18
>>>>> Maximum mount count:      27
>>>>> Last checked:             Wed Aug 28 08:30:10 2013
>>>>> Check interval:           15552000 (6 months)
>>>>> Next check after:         Mon Feb 24 07:30:10 2014
>>>>> Reserved blocks uid:      0 (user root)
>>>>> Reserved blocks gid:      0 (group root)
>>>>> First inode:              11
>>>>> Inode size:               256
>>>>> Required extra isize:     28
>>>>> Desired extra isize:      28
>>>>> Journal inode:            8
>>>>> Default directory hash:   half_md4
>>>>> Directory Hash Seed:      ad2251a9-ac33-4e5e-b933-af49cb4f2bb3
>>>>> Journal backup:           inode blocks
>>>>>
>>>>> $ df --block-size=1 /dev/mapper/datavg-datalogslv
>>>>> Filesystem                      1B-blocks      Used   Available Use% Mounted on
>>>>> /dev/mapper/datavg-datalogslv 13243846656 563843072 12007239680   5% /logs
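>>>>
>>>> For what it's worth, the numbers above do seem to add up exactly,
>>>> if you assume the "bsddf" overhead is the block/inode bitmaps, the
>>>> inode tables, and the superblock/group-descriptor backups (a rough,
>>>> untested sketch; I may be off by a block somewhere):
>>>>
>>>> # 3284992 blocks at 32768 blocks per group -> 101 block groups
>>>> groups=101
>>>> # each group: 1 block bitmap + 1 inode bitmap + 509 inode-table
>>>> # blocks; with sparse_super, 10 of the groups (0, 1 and the powers
>>>> # of 3, 5, 7) also carry a superblock copy + 1 group-descriptor block
>>>> overhead=$(( groups * (2 + 509) + 10 * (1 + 1) ))   # = 51631 blocks
>>>> echo $(( (3284992 - overhead) * 4096 ))             # = 13243846656
>>>>
>>>> which is exactly df's 1B-blocks figure. The journal (32768 blocks)
>>>> and the 801 reserved GDT blocks are not part of that overhead: they
>>>> belong to allocated inodes (the journal and resize inodes), so they
>>>> show up in the Used column instead, while the 164249 root-reserved
>>>> blocks only reduce the Available column.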
>
> --
> Nicolas MICHEL

--
Nicolas MICHEL

_______________________________________________
Ext3-users mailing list
Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users