Re: xfs metadata overhead

I see.
So using the bulkstat ioctl in a similar way to xfs_fsr, and summing the allocated sizes it reports, would give a more accurate number?
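
Something like this minimal sketch, perhaps (untested; it assumes the
xfsprogs development headers, uses the old XFS_IOC_FSBULKSTAT interface
that xfs_fsr also uses, and elides most error handling):

  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/stat.h>
  #include <sys/ioctl.h>
  #include <xfs/xfs.h>    /* xfsprogs headers: XFS_IOC_FSBULKSTAT et al. */

  int main(int argc, char **argv)
  {
      struct xfs_bstat buf[1024];
      __u64 last = 0;                 /* bulkstat resume cookie */
      __s32 count = 0;
      unsigned long long bytes = 0;
      struct xfs_fsop_bulkreq req = {
          .lastip  = &last,
          .icount  = 1024,            /* inodes per call */
          .ubuffer = buf,
          .ocount  = &count,
      };
      int fd;

      if (argc != 2)
          return 1;
      fd = open(argv[1], O_RDONLY);   /* any path on the fs, e.g. the mount point */
      if (fd < 0)
          return 1;

      /* walk every inode in the filesystem, 1024 at a time */
      while (ioctl(fd, XFS_IOC_FSBULKSTAT, &req) == 0 && count > 0) {
          for (int i = 0; i < count; i++) {
              /* regular files only: this leaves out directory
               * blocks, which are really metadata */
              if ((buf[i].bs_mode & S_IFMT) == S_IFREG)
                  bytes += (unsigned long long)
                           buf[i].bs_blocks * buf[i].bs_blksize;
          }
      }
      printf("file data allocated: %llu bytes\n", bytes);
      close(fd);
      return 0;
  }

Summing bs_blocks * bs_blksize over the regular files gives the bytes
allocated to file data, so df's "Used" minus that total should be a
better metadata estimate, since directory blocks are no longer counted
as data.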

Thanks,
Danny

On Fri, Jun 24, 2016 at 4:17 PM, Eric Sandeen <sandeen@xxxxxxxxxxx> wrote:
On 6/24/16 5:34 AM, Danny Shavit wrote:
>> How do you determine allocated_size, with du?
> yes via du
>> How different?  Can you show an example?
> metadata dump file size = 5.6 GB (6,089,374,208 bytes)
>
> df:
> Filesystem                   1K-blocks       Used  Available Use% Mounted on
> /dev/dm-39                  2725683200  955900860 1769782340  36% /export/v1     (Used = 978,842,480,640 bytes)
>
> du: du -s /export/v1/
> 952825644       /export/v1/     (= 975,693,459,456 bytes)
>
> 978,842,480,640 - 975,693,459,456 = 3,149,021,184
>
> Metadata size according to this calculation = 3,149,021,184 bytes

Oh, right.  I should have thought of this; "du" counts some metadata as
well, e.g. a directory full of zero-length files still consumes space,
which du reports.

So your 975,693,459,456 bytes is file data as well as some metadata.
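
For example, a quick way to see the effect (the paths here are made up):

  $ mkdir /tmp/empties
  $ touch /tmp/empties/file{1..10000}
  $ du -s /tmp/empties    # nonzero: the directory needs blocks to hold
                          # 10000 entries even though every file is empty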

-Eric

>
> Thanks,
> Danny
>
> On Thu, Jun 23, 2016 at 9:12 PM, Eric Sandeen <sandeen@xxxxxxxxxxx> wrote:
>
>     On 6/23/16 10:04 AM, Danny Shavit wrote:
>     > I see. We will try this direction.
>     > BTW: I thought that a good estimate would be "volume_size -
>     > allocated_size - free_space", but it produced quite a difference
>     > compared to the metadata dump size.
>     > Is there a specific reason?
>
>     How do you determine allocated_size, with du?
>
>     How different?  Can you show an example?
>
>     -Eric
>
>     > Thanks,
>     > Danny
>     >
>     > On Thu, Jun 23, 2016 at 1:51 AM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>     >
>     >     On Wed, Jun 22, 2016 at 06:58:16PM +0300, Danny Shavit wrote:
>     >     > Hi,
>     >     >
>     >     > We are looking for a method to estimate the size of the metadata
>     >     > overhead for a given file system.
>     >     > We would like to use this value as an indicator of the amount of
>     >     > cache memory a system needs for faster operation.
>     >     > Are there any counters maintained in the on-disk data structures,
>     >     > like the free space counters, for example?
>     >
>     >     No.
>     >
>     >     Right now, you'll need to take a metadump of the filesystem to
>     >     measure it. The size of the dump file will be a close indication of
>     >     the amount of metadata in the filesystem as it only contains
>     >     the filesystem metadata.
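>     >
>     >     For example, something like (device name borrowed from the df
>     >     output above; metadump wants an unmounted filesystem or a snapshot):
>     >
>     >         # xfs_metadump /dev/dm-39 /tmp/v1.metadump
>     >         # ls -lh /tmp/v1.metadump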
>     >
>     >     In the future, querying the rmap will enable us to calculate it on
>     >     the fly (i.e. without requiring the filesystem to be snapshotted or
>     >     taken offline for a metadump).
>     >
>     >     Cheers,
>     >
>     >     Dave.
>     >     --
>     >     Dave Chinner
>     >     david@xxxxxxxxxxxxx
>     >
>     >
>     >
>     >
>     > --
>     > Regards,
>     > Danny
>     >
>     >
>
>
>
>
> --
> Regards,
> Danny



--
Regards,
Danny
_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
