Re: Does your application depend on, or report, free disk space? Re: F20 Self Contained Change: OS Installer Support for LVM Thin Provisioning

On 27.07.2013 05:07, Chris Murphy wrote:

On Jul 26, 2013, at 4:53 PM, Pádraig Brady <P@xxxxxxxxxxxxxx> wrote:

On 07/26/2013 09:13 PM, Miloslav Trmač wrote:
Hello all,
with thin provisioning available, the total and free space values
reported by a filesystem do not necessarily mean that that much space
is _actually_ available (the actual backing storage may be smaller, or
shared with other filesystems).

If your package reports disk space usage to users, and bases this on
filesystem free space, please consider whether it might need to take
LVM thin provisioning into account.

The same applies if your package automatically allocates a certain
proportion of the total or available space.

A quick way to check whether your package is likely to be affected is
to look for statfs() or statvfs() calls in C, or the equivalent in
your higher-level library or programming language.
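
For example, the kind of code in question usually looks something like
this minimal statvfs() sketch (the path argument and the printed format
here are purely illustrative):

#include <stdio.h>
#include <sys/statvfs.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/";
    struct statvfs sv;

    if (statvfs(path, &sv) != 0) {
        perror("statvfs");
        return 1;
    }

    /* Counts are expressed in units of f_frsize, the fragment size. */
    unsigned long long total = (unsigned long long)sv.f_blocks * sv.f_frsize;
    unsigned long long avail = (unsigned long long)sv.f_bavail * sv.f_frsize;

    /* On a thinly provisioned LV these figures describe the filesystem
     * as it sees itself; the backing pool may hold less, or be shared
     * with other filesystems. */
    printf("%s: %llu bytes total, %llu bytes available\n",
           path, total, avail);
    return 0;
}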

Anything df(1) should do here?

Example: after creating a btrfs raid1 volume from two 2TB drives, df shows it as having 4TB available:

# parted -l

Error: /dev/sdb: unrecognised disk label
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 2199GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Error: /dev/sdc: unrecognised disk label
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 2199GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

# mkfs.btrfs -d raid1 -m raid1 /dev/sd[bc]

WARNING! - Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

adding device /dev/sdc id 2
fs created label (null) on /dev/sdb
	nodesize 4096 leafsize 4096 sectorsize 4096 size 4.00TB
Btrfs v0.20-rc1

# mount /dev/sdb /mnt
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        79G  4.2G   71G   6% /
devtmpfs        1.5G     0  1.5G   0% /dev
tmpfs           1.5G     0  1.5G   0% /dev/shm
tmpfs           1.5G  680K  1.5G   1% /run
tmpfs           1.5G     0  1.5G   0% /sys/fs/cgroup
tmpfs           1.5G  4.0K  1.5G   1% /tmp
none            224G   87G  138G  39% /media/sf_chris
/dev/sdb        4.0T   56K  4.0T   1% /mnt


The explanation is that the file system itself isn't raid1; rather, the allocated chunks have this attribute. Presently a volume allocates with only one profile, but the future plan is per-subvolume and even per-file raid profiles. So establishing how much free space there is on a btrfs volume is anything but clear.

Anyway, I think it will cause some confusion if an application takes "available" to mean it can write out more than 2TB of data to this example volume.
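
For what it's worth, the per-chunk-type numbers that btrfs filesystem df
prints come from the BTRFS_IOC_SPACE_INFO ioctl, so an application can ask
btrfs directly how chunks are allocated rather than trusting statfs().
A minimal sketch, assuming the uapi <linux/btrfs.h> header; the two-call
probe-then-fetch pattern follows that header's convention, and the output
format is illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>   /* BTRFS_IOC_SPACE_INFO and its argument structs */

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/mnt";
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* First call with space_slots == 0: the kernel fills in total_spaces
     * with the number of slots a full answer needs. */
    struct btrfs_ioctl_space_args probe = { 0 };
    if (ioctl(fd, BTRFS_IOC_SPACE_INFO, &probe) < 0) {
        perror("BTRFS_IOC_SPACE_INFO");
        return 1;
    }

    struct btrfs_ioctl_space_args *args =
        calloc(1, sizeof(*args) +
               probe.total_spaces * sizeof(struct btrfs_ioctl_space_info));
    if (!args) {
        perror("calloc");
        return 1;
    }
    args->space_slots = probe.total_spaces;
    if (ioctl(fd, BTRFS_IOC_SPACE_INFO, args) < 0) {
        perror("BTRFS_IOC_SPACE_INFO");
        return 1;
    }

    /* One entry per chunk type; the flags encode both the kind of chunk
     * (data/metadata/system) and its raid profile, so raid1 allocation
     * shows up here even though statfs() cannot express it. */
    for (unsigned long long i = 0; i < args->total_spaces; i++)
        printf("flags 0x%llx: total %llu, used %llu\n",
               (unsigned long long)args->spaces[i].flags,
               (unsigned long long)args->spaces[i].total_bytes,
               (unsigned long long)args->spaces[i].used_bytes);

    free(args);
    close(fd);
    return 0;
}

Even then, "free" space is only as good as the profile arithmetic an
application does on top of these numbers.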

I thought one of the features of combining the block layer and the filesystem layer, as btrfs does, is that the filesystem can actually know the state/topology of the block layer and work more efficiently. Combined with the already existing problem of getting out-of-disk-space errors long before usage hits 100% (has this been fixed since?), this makes any sort of capacity planning difficult, if not impossible.

Regards,
  Dennis

--
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/devel
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct




