On Thu 05-05-16 10:24:33, Michal Hocko wrote:
> > +/*
> > + * Check whether the request to writeback some pages can be merged with some
> > + * other request which is already pending. If yes, merge it and return true.
> > + * If no, return false.
> > + */
> > +static bool wb_merge_request(struct bdi_writeback *wb, long nr_pages,
> > +			     struct super_block *sb, bool range_cyclic,
> > +			     enum wb_reason reason)
> > +{
> > +	struct wb_writeback_work *work;
> > +	bool merged = false;
> > +
> > +	spin_lock_bh(&wb->work_lock);
> > +	list_for_each_entry(work, &wb->work_list, list) {
> 
> Is the length of the list bounded somehow? In other words is it possible
> that the spinlock would be held for too long to traverse the whole list?

I was thinking about this as well. With the merging enabled, the number of
entries queued from wb_start_writeback() is essentially limited by the
number of writeback reasons, and there are only a couple of those. What is
more questionable is the number of entries queued from
__writeback_inodes_sb_nr(). Generally there should be only a couple of
those at maximum as well, but it is hard to give any guarantee, since e.g.
filesystems use this function to reduce the amount of delay-allocated data
when they are running out of space.

Hum, maybe we could limit the merging to scan only the last, say, 16
entries (a rough sketch of this is appended below the thread). That should
give good results in most cases... Thoughts?

								Honza

> > +		if (work->reason == reason &&
> > +		    work->range_cyclic == range_cyclic &&
> > +		    work->auto_free == 1 && work->sb == sb &&
> > +		    work->for_sync == 0) {
> > +			work->nr_pages += nr_pages;
> > +			merged = true;
> > +			trace_writeback_merged(wb, work);
> > +			break;
> > +		}
> > +	}
> > +	spin_unlock_bh(&wb->work_lock);
> > +
> > +	return merged;
> > +}
> -- 
> Michal Hocko
> SUSE Labs

-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
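
P.S. A rough sketch of the bounded scan floated above, assuming the same
wb_writeback_work fields as in the quoted patch. Since wb_queue_work()
adds new work at the tail of wb->work_list, the "last" (most recently
queued) entries are reached by a reverse walk. The function name and the
WB_MERGE_SCAN_MAX constant are made up for illustration here, not anything
actually proposed or merged:

/* Upper bound on how many pending works we inspect before giving up. */
#define WB_MERGE_SCAN_MAX	16

static bool wb_merge_request_bounded(struct bdi_writeback *wb, long nr_pages,
				     struct super_block *sb, bool range_cyclic,
				     enum wb_reason reason)
{
	struct wb_writeback_work *work;
	bool merged = false;
	int scanned = 0;

	spin_lock_bh(&wb->work_lock);
	/* Walk from the tail so the most recently queued works come first. */
	list_for_each_entry_reverse(work, &wb->work_list, list) {
		if (++scanned > WB_MERGE_SCAN_MAX)
			break;
		if (work->reason == reason &&
		    work->range_cyclic == range_cyclic &&
		    work->auto_free == 1 && work->sb == sb &&
		    work->for_sync == 0) {
			work->nr_pages += nr_pages;
			merged = true;
			trace_writeback_merged(wb, work);
			break;
		}
	}
	spin_unlock_bh(&wb->work_lock);

	return merged;
}

This would keep the worst-case time the work_lock spinlock is held bounded
by a constant, regardless of how many requests __writeback_inodes_sb_nr()
has managed to queue.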