Re: [PATCH 3/5] mm: Implement IO-less balance_dirty_pages()

On Fri, 2011-02-04 at 02:38 +0100, Jan Kara wrote:
> +void distribute_page_completions(struct work_struct *work)
> +{
> +       struct backing_dev_info *bdi =
> +               container_of(work, struct backing_dev_info, balance_work.work);
> +       unsigned long written = bdi_stat_sum(bdi, BDI_WRITTEN);
> +       unsigned long pages_per_waiter, remainder_pages;
> +       struct balance_waiter *waiter, *tmpw;
> +       struct dirty_limit_state st;
> +       int dirty_exceeded;
> +
> +       trace_writeback_distribute_page_completions(bdi, bdi->written_start,
> +                                         written - bdi->written_start);

So in fact you only need to pass bdi and written :-)
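Something like the below, I mean (just a sketch -- it assumes the
TRACE_EVENT definition, which isn't in this hunk, can compute the
delta itself in TP_fast_assign()):

  /*
   * bdi->written_start is reachable through the bdi argument, so the
   * event can record both values from (bdi, written) alone, e.g.:
   *
   *    __entry->start   = bdi->written_start;
   *    __entry->written = written - bdi->written_start;
   */
  trace_writeback_distribute_page_completions(bdi, written);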

> +       dirty_exceeded = check_dirty_limits(bdi, &st);
> +       if (dirty_exceeded < DIRTY_MAY_EXCEED_LIMIT) {
> +               /* Wakeup everybody */
> +               trace_writeback_distribute_page_completions_wakeall(bdi);
> +               spin_lock(&bdi->balance_lock);
> +               list_for_each_entry_safe(
> +                               waiter, tmpw, &bdi->balance_list, bw_list)
> +                       balance_waiter_done(bdi, waiter);
> +               spin_unlock(&bdi->balance_lock);
> +               return;
> +       }
> +
> +       spin_lock(&bdi->balance_lock);

Is there any reason this is a spinlock and not a mutex?

> +       /*
> +        * Note: This loop can have quadratic complexity in the number of
> +        * waiters. It can be changed to a linear one if we also maintained a
> +        * list sorted by number of pages. But for now that does not seem to be
> +        * worth the effort.
> +        */

That doesn't seem to explain much :/

> +       remainder_pages = written - bdi->written_start;
> +       bdi->written_start = written;
> +       while (!list_empty(&bdi->balance_list)) {
> +               pages_per_waiter = remainder_pages / bdi->balance_waiters;
> +               if (!pages_per_waiter)
> +                       break;

If remainder_pages < balance_waiters you just lost your delta; it's
best not to set bdi->written_start until the end and to leave
everything not processed for the next round.
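Roughly like this (fragment only; "consumed" is a made-up local to
show the pattern, and the rewrite further down does the same thing
through "delta"):

  unsigned long consumed = 0;

  /* ... credit each waiter, adding whatever was actually handed
   * out to "consumed" ... */

  /*
   * Advance only past what was consumed; anything left over (e.g. a
   * remainder smaller than balance_waiters) is simply picked up by
   * the next distribution round.
   */
  bdi->written_start += consumed;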

> +               remainder_pages %= bdi->balance_waiters;
> +               list_for_each_entry_safe(
> +                               waiter, tmpw, &bdi->balance_list, bw_list) {
> +                       if (waiter->bw_to_write <= pages_per_waiter) {
> +                               remainder_pages += pages_per_waiter -
> +                                                  waiter->bw_to_write;
> +                               balance_waiter_done(bdi, waiter);
> +                               continue;
> +                       }
> +                       waiter->bw_to_write -= pages_per_waiter;
>                 }
> +       }
> +       /* Distribute remaining pages */
> +       list_for_each_entry_safe(waiter, tmpw, &bdi->balance_list, bw_list) {
> +               if (remainder_pages > 0) {
> +                       waiter->bw_to_write--;
> +                       remainder_pages--;
> +               }
> +               if (waiter->bw_to_write == 0 ||
> +                   (dirty_exceeded == DIRTY_MAY_EXCEED_LIMIT &&
> +                    !bdi_task_limit_exceeded(&st, waiter->bw_task)))
> +                       balance_waiter_done(bdi, waiter);
> +       }

OK, I see what you're doing, but I'm not quite sure it makes complete
sense yet.

  mutex_lock(&bdi->balance_mutex);
  for (;;) {
    unsigned long pages, pages_per_waiter;

    if (!bdi->balance_waiters) /* everybody done; avoid div by zero */
      break;
    pages = written - bdi->written_start;
    pages_per_waiter = pages / bdi->balance_waiters;
    if (!pages_per_waiter)
      break;
    list_for_each_entry_safe(waiter, tmpw, &bdi->balance_list, bw_list) {
      unsigned long delta = min(pages_per_waiter, waiter->bw_to_write);

      bdi->written_start += delta;
      waiter->bw_to_write -= delta;
      if (!waiter->bw_to_write)
        balance_waiter_done(bdi, waiter);
    }
  }
  mutex_unlock(&bdi->balance_mutex);

Comes close to what you wrote, I think.

One of the problems I have with it is that min(): it means that the
waiter waited too long, but will not be compensated for this by a
reduced next wait. Instead you give the surplus away to other waiters,
which preserves fairness at the bdi level, but not for tasks.

You can do that by keeping ->bw_to_write in task_struct and
normalizing it by the estimated bdi bandwidth (patch 5); that way,
when you next increment it, it will turn out to be lower and the wait
will be shorter.

That also removes the need to loop over the waiters.
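As a sketch (pages_completed, nr_dirtied and bdi->avg_bandwidth are
made-up names here -- avg_bandwidth standing in for whatever estimate
patch 5 actually provides; ->bw_to_write would also have to become
signed so it can carry a surplus):

  /* wakeup/accounting path: credit the pages this wait covered;
   * an overlong wait leaves a negative residual */
  tsk->bw_to_write -= pages_completed;

  /* next time the task dirties pages and has to throttle */
  tsk->bw_to_write += nr_dirtied;
  if (tsk->bw_to_write > 0) {
    /* normalize pages to a sleep via the bandwidth estimate */
    schedule_timeout_interruptible(tsk->bw_to_write * HZ /
                                   bdi->avg_bandwidth);
  }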



