Re: [RFC][PATCH] xfs: adjust size/used/avail information for quota-df

On Wed, Mar 21, 2018 at 11:36:08AM +0800, cgxu519@xxxxxxx wrote:
> On Mar 21, 2018, at 3:18 AM, Brian Foster <bfoster@xxxxxxxxxx> wrote:
> > 
> > On Tue, Mar 20, 2018 at 12:41:50PM -0500, Eric Sandeen wrote:
> >> On 3/20/18 9:49 AM, cgxu519@xxxxxxx wrote:
> >> 
> >> ...
> >> 
> >>> No, not really. Assume we have a 100GB xfs filesystem (/mnt/test2) containing
> >>> 3 directories (pq1, pq2, pq3), each with a project quota set
> >>> (size limit of 10GB).
> >>> 
> >>> When only 3.2MB of the whole filesystem is left available, df on pq1, pq2
> >>> and pq3 still reports 9.5GB available, which is much more than the real
> >>> filesystem has. What do you think?
> >>> 
> >>> Detail output [1]. (without this fix patch)
> >>> 
> >>> $ df -h /mnt/test2
> >>> Filesystem      Size  Used Avail Use% Mounted on
> >>> /dev/vdb2       100G  100G  3.2M 100% /mnt/test2
> >>> 
> >>> $ df -h /mnt/test2/pq1
> >>> Filesystem      Size  Used Avail Use% Mounted on
> >>> /dev/vdb2        10G  570M  9.5G   6% /mnt/test2
> >>> 
> >>> $ df -h /mnt/test2/pq2
> >>> Filesystem      Size  Used Avail Use% Mounted on
> >>> /dev/vdb2        10G  570M  9.5G   6% /mnt/test2
> >>> 
> >>> $ df -h /mnt/test2/pq3
> >>> Filesystem      Size  Used Avail Use% Mounted on
> >>> /dev/vdb2        10G  570M  9.5G   6% /mnt/test2
> >> 
> >> I agree that this is a confusing result.
> >> 
> > 
> > Ditto. Thanks for the example Chengguang.
> > 
> >>> Detail output [2]. (with this fix patch)
> >>> 
> >>> $ df -h /mnt/test2
> >>> Filesystem      Size  Used Avail Use% Mounted on
> >>> /dev/vdb2       100G  100G  3.2M 100% /mnt/test2
> >>> 
> >>> $ df -h /mnt/test2/pq1
> >>> Filesystem      Size  Used Avail Use% Mounted on
> >>> /dev/vdb2       574M  570M  3.2M 100% /mnt/test2
> >>                   ^           ^
> >>                   |           |
> >>                   |           +-- This makes sense 
> >>                   |
> >>                   +-- This is a little bit odd
> >> 
> >> So you cap the available project space to host filesystem
> >> available space, and also use that to compute the
> >> total size of the "project" by adding used+available.
> >> 
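For illustration, a minimal sketch of the computation described above, using
purely hypothetical names rather than real XFS symbols: cap the project's
available space to the host filesystem's free space, then report size as
used + avail.

#include <stdint.h>

/*
 * Floating-size variant (the patch behavior under discussion): "size" is
 * recomputed from used + capped avail, so it shrinks as the host fs fills.
 */
static void project_df_floating_size(uint64_t quota_limit, uint64_t quota_used,
                                     uint64_t fs_avail,
                                     uint64_t *size, uint64_t *used,
                                     uint64_t *avail)
{
        uint64_t quota_avail = quota_limit - quota_used;

        *used  = quota_used;
        *avail = quota_avail < fs_avail ? quota_avail : fs_avail;
        *size  = *used + *avail;        /* e.g. 570M + 3.2M ~= 574M */
}
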
> > 
> > I think I agree here too. Personally, I'd expect the fs size to remain
> > static one way or another (i.e., whether it's the full fs or a sub-fs
> > via project quota) and see the used/avail numbers change based on the
> > current state rather than see the size float around due to just wanting
> > to make the numbers add up. The latter makes it difficult to understand
> > the (virtual) geometry of the project.
> > 
> >> The slightly strange result is that "size" will shrink
> >> as more filesystem space gets used, but I'm not
> >> sure I have a better suggestion here... would the below
> >> result be too confusing?  It is truthful; the limit is 10G,
> >> 570M are used, and only 3.2M is currently available due to
> >> the host filesystem freespace constraint:
> >> 
> >> $ df -h /mnt/test2/pq1
> >> Filesystem      Size  Used Avail Use% Mounted on
> >> /dev/vdb2       10G   570M  3.2M 100% /mnt/test2
> >> 
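For comparison, a sketch of the variant Eric shows above (again with
hypothetical names): keep size at the quota limit, report the project's real
usage, and cap only avail to the host filesystem's free space, accepting
that size != used + avail.

#include <stdint.h>

/* Static-size variant: size stays at the limit, only avail is capped. */
static void project_df_static_size(uint64_t quota_limit, uint64_t quota_used,
                                   uint64_t fs_avail,
                                   uint64_t *size, uint64_t *used,
                                   uint64_t *avail)
{
        uint64_t quota_avail = quota_limit - quota_used;

        *size  = quota_limit;   /* 10G in the example */
        *used  = quota_used;    /* 570M */
        *avail = quota_avail < fs_avail ? quota_avail : fs_avail;  /* 3.2M */
        /* Note: size != used + avail when the host fs is the constraint. */
}
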
> > 
> > Slightly confusing, but I'd rather have accuracy than guarantee that
> > size = used + avail. The above at least tells us that something is
> > missing, even if it's not totally obvious that the missing space is
> > unavailable due to the broader fs free space limitation. It's probably
> > the type of thing you'd expect to see if space reporting were truly
> > accurate on a thin volume, for example.
> 
> 
> Personally, I agree with your suggestion; I care more about avail/used
> than about the size. Unfortunately, statfs only collects f_blocks,
> f_bfree and f_bavail, and df calculates the used space from those
> variables, so there is no way to specify the used space directly. This
> is the reason I hoped to guarantee 'total = used + avail'.
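
For reference, a rough sketch of how a df-like tool derives its columns from
those counters (generic userspace code, not part of this patch); note that
'used' is always computed from f_blocks - f_bfree rather than reported
directly.

#include <stdio.h>
#include <sys/statvfs.h>

/* Print df-style numbers for a path using only the statvfs block counters. */
static int print_df(const char *path)
{
        struct statvfs st;

        if (statvfs(path, &st) != 0)
                return -1;

        unsigned long long bs    = st.f_frsize;
        unsigned long long size  = st.f_blocks * bs;
        unsigned long long avail = st.f_bavail * bs;
        unsigned long long used  = (st.f_blocks - st.f_bfree) * bs;

        printf("size=%llu used=%llu avail=%llu\n", size, used, avail);
        return 0;
}
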
> 
> 
> If we keep the size static at 10GB and adjust avail to 3.2MB, then the
> result looks like the output below. :(
> 
> $ df -h /mnt/test2
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/vdb2       100G  100G  3.2M 100% /mnt/test2
> 
> $ df -h /mnt/test2/pq1
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/vdb2        10G   10G  3.2M 100% /mnt/test2
> 

Ah, I see. So used becomes inaccurate at that point. Hmm, it's starting
to seem to me that leaving this as is might be the right approach. The
output above is misleading because that much space has not actually been
used by the quota. As previously noted, I find the floating-size approach
confusing; it's not really clear what it's telling me as a user.

The current approach directly maps the quota state to the stats fields
so it clearly tells me 1.) the limit and 2.) how much of the limit I've
used. If the parent filesystem is more restrictive and operations result
in ENOSPC, then that's something the admin will have to resolve one way
or another.
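
As a rough sketch of that mapping (hypothetical names, not the actual XFS
quota code), the stats fields come straight from the project quota state and
ignore the host filesystem's free space entirely:

#include <stdint.h>

/* Current approach: report the quota limit as size, quota usage as used. */
static void project_df_from_quota(uint64_t quota_limit, uint64_t quota_used,
                                  uint64_t *size, uint64_t *used,
                                  uint64_t *avail)
{
        *size  = quota_limit;                   /* 1.) the limit */
        *used  = quota_used;                    /* 2.) usage against it */
        *avail = quota_limit > quota_used ?     /* may exceed what the */
                 quota_limit - quota_used : 0;  /* host fs can provide */
}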

That's just my .02. Perhaps others feel differently and/or have better
logic.

> 
> 
> > 
> > FWIW, the other option is just to leave the output as above where we
> > presumably ignore the global free space cap and present 9.5GB available.
> > I think it's fine to fix/limit that, but I'd prefer an inaccurate
> > available number to an inaccurate/variable fs size either way.
> > 
> > With regard to a soft limit, it looks like we currently size the fs at
> > the soft limit and simply call it 100% used if the limit is exceeded.
> > That seems reasonable to me if only a soft limit is set, but I suppose
> > that could hide some info if both hard/soft limits are set. Perhaps we
> > should use the max of the soft/hard limit if both are set (or I guess
> > prioritize a hard limit iff it's larger than the soft, to avoid
> > insanity)? I suppose one could also argue that some admins might want to
> > size an fs with the soft limit, give users a bit of landing room, then
> > set a hard cap to protect the broader fs. :/
> 
> If we want to keep the size static then we need to choose either the soft or
> the hard limit. I think the hard limit is a little better and more meaningful,
> because exceeding the soft limit might not directly cause a write error.
> 

It seems reasonable enough to me to always use the hardlimit when both a
hard and soft limit are set, but I don't really have a strong opinion
either way.
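
A sketch of the limit selection being discussed, assuming hypothetical names
(the real code would read the dquot's soft and hard block limits, where 0
means unset):

#include <stdint.h>

/* Pick the block limit to report as the project "size"; 0 means not set. */
static uint64_t project_report_limit(uint64_t softlimit, uint64_t hardlimit)
{
        if (hardlimit)
                return hardlimit;  /* prefer the hard limit when both are set */
        return softlimit;          /* soft-only: size the fs at the soft limit */
}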

Brian

> 
> 
> > 
> > Brian
> > 
> >> -Eric
> 