Re: [RFC PATCH] Introduce generalized data temperature estimation framework

On Mon, Jan 27, 2025 at 9:59 PM Viacheslav Dubeyko
<Slava.Dubeyko@xxxxxxx> wrote:
>
> On Mon, 2025-01-27 at 15:19 +0100, Hans Holmberg wrote:
> > On Fri, Jan 24, 2025 at 10:03 PM Viacheslav Dubeyko
> > <Slava.Dubeyko@xxxxxxx> wrote:
> > >
> > > On Fri, 2025-01-24 at 08:19 +0000, Johannes Thumshirn wrote:
> > > > On 23.01.25 21:30, Viacheslav Dubeyko wrote:
> > > > > [PROBLEM DECLARATION]
> > > > > An efficient data placement policy is a Holy Grail for
> > > > > data storage and file system engineers. Achieving this
> > > > > goal is as important as it is hard. Multiple data storage
> > > > > and file system technologies have been invented to manage
> > > > > data placement policy (for example, COW, ZNS, FDP, etc.).
> > > > > But these technologies still require hints about the
> > > > > nature of the data from the application side.
> > > > >
> > > > > [DATA "TEMPERATURE" CONCEPT]
> > > > > One widely used and intuitively clear way of describing
> > > > > the nature of data is its "temperature" (cold, warm,
> > > > > hot data). However, data "temperature" is as intuitively
> > > > > appealing as it is elusive to define. Thermodynamics
> > > > > defines temperature as a measure of the average kinetic
> > > > > energy of the vibrating atoms in a substance. But there
> > > > > is no direct analogy between data "temperature" and
> > > > > temperature in physics, because data is not something
> > > > > that has kinetic energy.
> > > > >
> > > > > [WHAT IS GENERALIZED DATA "TEMPERATURE" ESTIMATION]
> > > > > We usually imply that if some data is updated more
> > > > > frequently, then it is hotter than other data. But there
> > > > > are several problems here: (1) How can we estimate data
> > > > > "hotness" in a quantitative way? (2) We can declare data
> > > > > "hot" after some number of updates, but that definition
> > > > > describes the state of the data in the past. Will this
> > > > > data continue to be "hot" in the future? Generally
> > > > > speaking, the crucial problem is how to predict the
> > > > > nature, or "temperature", of data in the future, because
> > > > > this knowledge is the fundamental basis for elaborating
> > > > > an efficient data placement policy. The generalized data
> > > > > "temperature" estimation framework suggests a way to
> > > > > predict the future state of the data and a basis for
> > > > > quantitative measurement of data "temperature".
> > > > >
> > > > > [ARCHITECTURE OF FRAMEWORK]
> > > > > Usually, a file system has a page cache for every inode,
> > > > > and memory pages initially become dirty in the page
> > > > > cache. Eventually, dirty pages are written back to the
> > > > > storage device. Technically speaking, the number of dirty
> > > > > pages in a particular page cache is a quantitative
> > > > > measure of the current "hotness" of a file. But the
> > > > > number of dirty pages alone is not a stable basis for
> > > > > quantitative measurement of data "temperature". Instead,
> > > > > the total number of logical blocks in a file can serve as
> > > > > the unit of one degree of data "temperature". As a
> > > > > result, if the whole file has been updated several times,
> > > > > then the "temperature" of the file has increased by
> > > > > several degrees. And if the file is under continuous
> > > > > updates, then the file's "temperature" keeps growing.
> > > > >
> > > > > We need to keep not only the current number of dirty
> > > > > pages, but also the number of pages updated in the recent
> > > > > past, in order to accumulate the total "temperature" of a
> > > > > file. Generally speaking, the total number of pages
> > > > > updated in the recent past defines the accumulated
> > > > > "temperature" of the file, and the number of dirty pages
> > > > > defines the delta of "temperature" growth for the current
> > > > > update operation. This defines the mechanism of
> > > > > "temperature" growth (see the sketch below).
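> > > > >
> > > > > For illustration only (the names below are invented for
> > > > > this email and are not taken from the patch; for
> > > > > simplicity, one logical block per page is assumed), the
> > > > > per-inode state and the growth step could look roughly
> > > > > like this:
> > > > >
> > > > > /* Illustrative sketch; not the actual patch code. */
> > > > > struct data_temperature {
> > > > >         unsigned long dirty_pages;   /* dirty right now */
> > > > >         unsigned long updated_pages; /* written back recently */
> > > > > };
> > > > >
> > > > > /*
> > > > >  * One "degree" corresponds to one full rewrite of the
> > > > >  * file, so the accumulated temperature in degrees is the
> > > > >  * number of recently updated pages divided by the file
> > > > >  * size in logical blocks.
> > > > >  */
> > > > > static inline unsigned long
> > > > > temperature_degrees(unsigned long updated_pages,
> > > > >                     unsigned long total_blocks)
> > > > > {
> > > > >         return total_blocks ? updated_pages / total_blocks : 0;
> > > > > }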
> > > > >
> > > > > But if the file receives no more updates, then its
> > > > > "temperature" needs to decrease. The starting and ending
> > > > > timestamps of an update operation can serve as the basis
> > > > > for decreasing the "temperature" of a file. If we know
> > > > > the number of updated logical blocks in the file, then we
> > > > > can divide the duration of the update operation by the
> > > > > number of updated logical blocks. This gives the time
> > > > > duration per logical block. Multiplying this value by the
> > > > > total number of logical blocks in the file gives the time
> > > > > it takes for the "temperature" to decrease by one degree.
> > > > > Finally, dividing the time range between the end of the
> > > > > last update operation and the beginning of the new one by
> > > > > this one-degree decay duration defines how many degrees
> > > > > should be subtracted from the current "temperature" of
> > > > > the file.
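> > > > >
> > > > > In code, the decay step boils down to something like the
> > > > > following sketch (again, purely illustrative names;
> > > > > timestamps are assumed to share one time unit, e.g.
> > > > > jiffies):
> > > > >
> > > > > static unsigned long
> > > > > temperature_decay(unsigned long start, unsigned long end,
> > > > >                   unsigned long now,
> > > > >                   unsigned long updated_blocks,
> > > > >                   unsigned long total_blocks)
> > > > > {
> > > > >         unsigned long per_block, per_degree;
> > > > >
> > > > >         if (!updated_blocks || end <= start || now <= end)
> > > > >                 return 0;
> > > > >
> > > > >         /* time spent per updated logical block */
> > > > >         per_block = (end - start) / updated_blocks;
> > > > >         /* time to lose one degree: one whole file's worth */
> > > > >         per_degree = per_block * total_blocks;
> > > > >
> > > > >         /* degrees lost while the file has been idle */
> > > > >         return per_degree ? (now - end) / per_degree : 0;
> > > > > }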
> > > > >
> > > > > [HOW TO USE THE APPROACH]
> > > > > The lifetime of the data "temperature" value for a file
> > > > > can be explained in steps (sketched below): (1) the
> > > > > iget() method sets up the data "temperature" object;
> > > > > (2) folio_account_dirtied() accounts the number of dirty
> > > > > memory pages and tries to estimate the current
> > > > > temperature of the file; (3) folio_clear_dirty_for_io()
> > > > > decreases the number of dirty memory pages and increases
> > > > > the number of updated pages; (4) folio_account_dirtied()
> > > > > also decreases the file's "temperature" if no updates
> > > > > have happened for some time; (5) the file system can get
> > > > > the file's temperature and share the hint with the block
> > > > > layer; (6) the inode eviction method removes and frees
> > > > > the data "temperature" object.
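> > > > >
> > > > > Schematically, the hook points look like this (purely
> > > > > illustrative; not the actual patch code):
> > > > >
> > > > > /*
> > > > >  * iget()
> > > > >  *     allocate and attach the "temperature" object
> > > > >  * folio_account_dirtied()
> > > > >  *     dirty_pages++; apply decay if the file was idle
> > > > >  * folio_clear_dirty_for_io()
> > > > >  *     dirty_pages--; updated_pages++
> > > > >  * file system write path
> > > > >  *     read the temperature, pass a hint to the block layer
> > > > >  * evict_inode()
> > > > >  *     detach and free the "temperature" object
> > > > >  */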
> > > >
> > > > I don't want to pour gasoline on old flame wars, but what is the
> > > > advantage of this auto-magic data temperature framework vs the existing
> > > > framework?
> > > >
> > >
> > > There is no magic in this framework. :) It's a simple and compact framework.
> > >
> > > > 'enum rw_hint' has temperatures in the range of none, short,
> > > > medium, long, and extreme (whatever that means); it can be set by
> > > > an application via fcntl() and is plumbed down all the way to the
> > > > bio level by most FSes that care.
> > >
> > > I see your point. But the 'enum rw_hint' defines qualitative grades again:
> > >
> > > enum rw_hint {
> > >         WRITE_LIFE_NOT_SET      = RWH_WRITE_LIFE_NOT_SET,
> > >         WRITE_LIFE_NONE         = RWH_WRITE_LIFE_NONE,
> > >         WRITE_LIFE_SHORT        = RWH_WRITE_LIFE_SHORT,  <-- HOT data
> > >         WRITE_LIFE_MEDIUM       = RWH_WRITE_LIFE_MEDIUM, <-- WARM data
> > >         WRITE_LIFE_LONG         = RWH_WRITE_LIFE_LONG,   <-- COLD data
> > >         WRITE_LIFE_EXTREME      = RWH_WRITE_LIFE_EXTREME,
> > > } __packed;
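> > >
> > > For reference, an application sets such a hint roughly like this
> > > (assuming a reasonably recent glibc that exposes F_SET_RW_HINT and
> > > the RWH_* values under _GNU_SOURCE; otherwise the definitions live
> > > in <linux/fcntl.h>):
> > >
> > > #define _GNU_SOURCE
> > > #include <fcntl.h>
> > > #include <stdint.h>
> > > #include <stdio.h>
> > >
> > > /* Mark the data written through fd as short-lived ("hot"). */
> > > static int mark_file_hot(int fd)
> > > {
> > >         uint64_t hint = RWH_WRITE_LIFE_SHORT;
> > >
> > >         if (fcntl(fd, F_SET_RW_HINT, &hint) == -1) {
> > >                 perror("F_SET_RW_HINT");
> > >                 return -1;
> > >         }
> > >         return 0;
> > > }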
> > >
> > > First of all, again, it's hard to compare the hotness of different
> > > files on such a qualitative basis. Secondly, who decides what the
> > > hotness of particular data is? People can only guess or assume the
> > > nature of data based on past experience. But workloads change and
> > > evolve continuously, in real time. Technically speaking, an
> > > application can try to estimate the hotness of its data, but a file
> > > system can receive requests from multiple threads and multiple
> > > applications, so the application can likewise only guess at the
> > > real nature of the data. Moreover, nobody wants to implement
> > > dedicated logic for data hotness estimation in an application.
> > >
> > > This framework is inode based, and it tries to estimate a file's
> > > "temperature" on a quantitative basis. Advantages of this framework:
> > > (1) we don't need to guess about data hotness; the temperature is
> > > calculated quantitatively; (2) a quantitative basis makes it
> > > possible to fairly compare the temperatures of different files;
> > > (3) a file's temperature changes with the workload(s) in real time;
> > > (4) a file's temperature is accounted correctly under load from
> > > multiple applications. I believe these are the advantages of the
> > > suggested framework.
> > >
> >
> > While I think the general idea (using file overwrite rates as a
> > parameter when doing data placement) could be useful, it could not
> > replace the user space hinting we already have.
> >
> > Applications (e.g. RocksDB) doing sequential writes to files that are
> > immutable until deleted (no overwrites) would not benefit. We need
> > user space help to estimate data lifetime for those workloads, and
> > the relative write lifetime hints are useful for that.
> >
>
> I don't see any competition or conflict here. The suggested approach
> and user-space hinting could be complementary techniques. If
> user-space logic would like to use a special data placement policy,
> then it can share hints in its own way. And, potentially, the
> suggested approach of temperature calculation can be used to check
> the effectiveness of the user-space hinting and, maybe, to correct it.

I don't see a conflict here either; my point is just that this
framework cannot replace the user hints.

>
> > So what I am asking myself is: if this framework is added, who would
> > benefit? Without any benchmark results, it's a bit hard to tell :)
> >
>
> Which benefits would you like to see? I assume we would like to:
> (1) prolong device lifetime, (2) improve performance, and (3) decrease
> the GC burden. Do you mean these benefits?

Yep, decreased write amplification essentially.

>
> As far as I can see, different file systems can use temperature in
> different ways, and this slightly complicates the benchmarking. So how
> can we define effectiveness here, and how can we measure it? Do you
> have a vision here? I am happy to do more benchmarking.
>
> My point is that the calculated file temperature gives a quantitative
> way to distribute user data evenly among several temperature groups
> ("baskets"). And these baskets/segments/anything-else give us a way to
> properly group data. File systems can employ the temperature in
> various ways, but it can definitely help in elaborating a proper data
> placement policy. As a result, the GC burden can be decreased,
> performance can be improved, and device lifetime can be prolonged. So
> how can we benchmark these points? And which approaches does it make
> sense to compare?
>

To start off, it would be nice to demonstrate that write amplification
decreases for some workload when the temperature is taken into
account. It would be great if the workload were an actual application
workload, or a synthetic one mimicking a real-world use case.
Run the same workload twice, measure write amplification, and compare
the results.

What user workloads do you see benefiting from this framework? Which would not?

> > Also, is there a good reason for only supporting buffered IO? Direct
> > IO could benefit in the same way, right?
> >
>
> I think that Direct IO could benefit too. The question here is how to
> account for dirty memory pages and updated memory pages. Currently, I
> am using folio_account_dirtied() and folio_clear_dirty_for_io() to
> implement the temperature calculation. As far as I can see, Direct IO
> requires different methods for doing this. The rest of the logic can
> be the same.

It's probably a good idea to cover direct IO as well then, as this is
intended to be a generalized framework.




