On 01/05/2018 01:17 PM, Michael Lyle wrote:
> Bcache needs to scale the dirty data in the cache over the multiple
> backing disks in order to calculate writeback rates for each.
>
> The previous code did this by multiplying the target number of dirty
> sectors by the backing device size, and expected it to fit into a
> uint64_t; this blows up on relatively small backing devices.
>
> The new approach figures out the bdev's share in 16384ths of the overall
> cached data. This is chosen to cope well when bdevs drastically vary in
> size and to ensure that bcache can cross the petabyte boundary for each
> backing device.
>
> This has been improved based on Tang Junhui's feedback to ensure that
> every device gets a share of dirty data, no matter how small it is
> compared to the total backing pool.
>
> Reported-by: Jack Douglas <jack@xxxxxxxxxxxxxxxxxxxxxxx>
> Signed-off-by: Michael Lyle <mlyle@xxxxxxxx>

Commentary:

I don't love this, at all. It really should be the device's share of the
dirty data, not the device's share of the backing size, that sets its
share of the rate (so that if you have a 100GB cache vol, with a 10GB
dirty target, and 5 backing devices of which only 2 are active.. those 2
can use all of the 10GB dirty). But we lack an appropriate accountancy
mechanism right now, so it has to be done this way.

This has seen light testing so far -- a lot of single-backing tests, and a
couple of 2- and 3-backing-device tests.

Mike
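
For readers following along, below is a minimal userspace sketch of the
arithmetic the commit message describes: each backing device's share of
the overall backing pool is expressed in 16384ths and then applied to
the cache-wide dirty target, instead of multiplying the target by the
device size directly (which can overflow a uint64_t). This is not the
actual patch; the function name, SHARE_SHIFT constant, and the example
sizes are illustrative assumptions only.

#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

#define SHARE_SHIFT 14			/* 1 << 14 == 16384 shares */

/*
 * Give one backing device its slice of the cache-wide dirty target,
 * proportional to its size, without ever forming
 * dirty_target_sectors * bdev_sectors as a single product.
 */
static uint64_t calc_bdev_dirty_target(uint64_t dirty_target_sectors,
				       uint64_t bdev_sectors,
				       uint64_t total_backing_sectors)
{
	/* The device's share of all backing storage, in 16384ths. */
	uint64_t share = (bdev_sectors << SHARE_SHIFT) / total_backing_sectors;

	/* Every device gets at least one share, however small it is. */
	if (share < 1)
		share = 1;

	return (dirty_target_sectors * share) >> SHARE_SHIFT;
}

int main(void)
{
	/* Illustrative sizes in 512-byte sectors. */
	uint64_t dirty_target = 10ULL << 21;	/* 10 GiB dirty target */
	uint64_t total        = 5ULL  << 31;	/* 5 TiB of backing devices */
	uint64_t small_bdev   = 1ULL  << 21;	/* one 1 GiB backing device */

	printf("small bdev dirty target: %" PRIu64 " sectors\n",
	       calc_bdev_dirty_target(dirty_target, small_bdev, total));
	return 0;
}

With these example sizes the 1 GiB device ends up with roughly 2 MiB of
the 10 GiB dirty target, and the clamp to one share keeps a tiny device
from being assigned a target of zero.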