Re: UBIFS quota support

On Wed, Jan 30, 2019 at 01:45:35PM +0100, Jan Kara wrote:
> Hello,
> 
> > UBIFS has no idea of transactions and works a little differently from
> > ext4. In UBIFS all filesystem data and metadata are stored in the leaf
> > nodes of a b+tree. These leaf nodes are constantly written to flash and
> > would be enough to reconstruct the FS. From time to time UBIFS does a
> > commit and writes the index nodes to flash. During recovery the tree can
> > be read from the index nodes from the last commit. The remaining leaf
> > nodes that were written after the last commit are then scanned and added
> > to the tree during replay.
> > 
> > Quota seems to work in the way that it has callbacks into the FS to read
> > and update dqblks. This is not very suitable for UBIFS. Instead it would
> 
> Yes, generally it works by requesting loading / storing of quota
> information for a particular user from the filesystem.
> 
> > be nicer to read the full quota data from flash and hand it over to
> > quota. When UBIFS does a commit it would then request a consistent view
> > of the quota data and write it back to flash. During replay of an
> > uncleanly mounted FS UBIFS could read the quota data from the last
> > commit and update it with the remaining leaf nodes that need to be
> > replayed anyway.
> 
> Well, I don't think writing all quota data for each commit is a good
> design. That will write out a lot of unnecessary stuff that didn't change
> since last time. It is like if you rewrite the whole file to update
> one block in it... Similarly loading all quota data doesn't look great for
> performance. There can be thousands or tens of thousands different users of
> the filesystem (yes, I've seen such deployments) and you don't want to read
> and keep in memory information you don't currently need. But I guess UBIFS
> is targeted at smaller deployments?

It seems our perspectives on quota are quite different ;)
UBIFS targets embedded systems; in a typical use case I wouldn't expect
more than a handful of users, and certainly not more users than whose
dqblks fit in a single page. From my perspective the overhead is that we
have a tree at all, and that the current quota formats constantly change
the single page that holds all our quota data, instead of writing the
whole data out at once every once in a while.
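To illustrate what I mean: for a handful of users, all quota records fit in one flat, page-sized table that could be written out wholesale at each commit. A minimal user-space sketch of that idea (the `ubifs_dqblk` layout and the helper are hypothetical illustrations, not real UBIFS or quota-core structures):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical on-flash quota record; not a real UBIFS structure. */
struct ubifs_dqblk {
	uint32_t id;          /* uid or gid */
	uint64_t space_used;  /* bytes charged to this id */
	uint64_t space_limit; /* hard limit, 0 = unlimited */
};

/* With 4 KiB pages, well over a hundred such records fit in one page. */
#define QUOTA_PAGE_SIZE 4096
#define MAX_DQBLKS (QUOTA_PAGE_SIZE / sizeof(struct ubifs_dqblk))

struct quota_page {
	struct ubifs_dqblk blk[MAX_DQBLKS];
	size_t nr;            /* number of records in use */
};

/* "Commit": serialize the whole table into one page-sized buffer. */
static size_t quota_page_serialize(const struct quota_page *qp,
				   uint8_t buf[QUOTA_PAGE_SIZE])
{
	size_t len = qp->nr * sizeof(struct ubifs_dqblk);

	memcpy(buf, qp->blk, len);
	return len;
}
```

The point of the sketch is only that one write covers the complete quota state, so recovery never has to merge partial updates to the table.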

> 
> Anyway, what is easily doable is that you would just ignore requests for
> update from quota subsystem (just let quota subsystem mark dquot as dirty)
> and then on commit call dquot_writeback_dquots() that will call
> ->write_dquot callback for each dquot that is dirty. That way you'd get
> full dump of dquots that changed since last commit. You'd need to somehow
> make sure this gets merged with information from previous commit. Then on
> crash recovery you'd load quota information from commit and update it with
> the changes that happened since that last commit.

Yes, something like that should work.
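Roughly, the scheme you describe would mark dquots dirty on every charge and only emit the dirty ones at commit time. A simplified user-space mock of that flow (the `struct dquot` here and both helpers are stand-ins for the kernel's dquot machinery around dquot_writeback_dquots() and ->write_dquot, not real API):

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's struct dquot. */
struct dquot {
	unsigned int id;
	unsigned long long usage;
	bool dirty;   /* set on every charge, cleared on writeback */
};

/* Charge space to a quota and mark it dirty; no I/O happens here. */
static void dquot_charge(struct dquot *dq, unsigned long long bytes)
{
	dq->usage += bytes;
	dq->dirty = true;
}

/*
 * Commit-time writeback: walk the dquots and "write" only the dirty
 * ones, mirroring what dquot_writeback_dquots() plus a ->write_dquot
 * callback would do. Returns the number of dquots written.
 */
static int writeback_dirty(struct dquot *dqs, size_t n)
{
	int written = 0;

	for (size_t i = 0; i < n; i++) {
		if (!dqs[i].dirty)
			continue;
		/* here UBIFS would emit the dqblk as a leaf node */
		dqs[i].dirty = false;
		written++;
	}
	return written;
}
```

Crash recovery would then start from the dqblks of the last commit and re-apply the charges found in the replayed leaf nodes, which the replay pass scans anyway.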

> 
> Honestly loading quota limits by init scripts looks like a hack to me. Note
> that quota limits rarely change so you can easily store them separately
> from usage information and load them on mount. Since setting of limits does
> not have to be crash-safe (well, it needs to keep the quota information in
> a state that is recoverable by mount but it doesn't need to coordinate with
> any other filesystem operations), I don't think implementing that would be
> hard...

It's not my goal to put quota limits into init scripts, but I still
think it could be a useful intermediate step to divide the whole thing
into more manageable parts. I think I'll implement it up to that point
and post the result. Then we can see whether that is enough to merge or
whether I have to implement the rest first.

Sascha

-- 
Pengutronix e.K.                           |                             |
Industrial Linux Solutions                 | http://www.pengutronix.de/  |
Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0    |
Amtsgericht Hildesheim, HRA 2686           | Fax:   +49-5121-206917-5555 |


