what is the point of nr_pages information for the flusher thread?

Currently there are three possible values we pass to the flusher thread
for the nr_pages argument:

 - in sync_inodes_sb and bdi_start_background_writeback:

	LONG_MAX

 - in writeback_inodes_sb and wb_check_old_data_flush:

	global_page_state(NR_FILE_DIRTY) +
	global_page_state(NR_UNSTABLE_NFS) +
	(inodes_stat.nr_inodes - inodes_stat.nr_unused)

 - in wakeup_flusher_threads and laptop_mode_timer_fn:

	global_page_state(NR_FILE_DIRTY) +
	global_page_state(NR_UNSTABLE_NFS)

The LONG_MAX cases are trivially explained: we ignore the nr_to_write
value for data integrity writeback in the low-level writeback code, and
the for_background case in bdi_start_background_writeback has its own
check against the background threshold.  So far so good, and now it
gets interesting.

Why does writeback_inodes_sb add the number of used inodes to a value
that is in units of pages?  And why don't the other callers do this?

But seriously, how is the _global_ number of dirty and unstable pages
a good indicator for the amount of writeback needed on a particular
bdi or superblock anyway?
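
We do have per-bdi writeback counters, so the estimate could at least
be scoped to the right device.  An entirely hypothetical helper, just
to illustrate (bdi_dirty_pages is a made-up name, BDI_RECLAIMABLE is
the existing per-bdi counter):

	/* hypothetical: estimate per-bdi instead of globally.
	 * bdi_dirty_pages() is a made-up name; BDI_RECLAIMABLE is the
	 * existing per-bdi counter of reclaimable (dirty + unstable)
	 * pages. */
	static long bdi_dirty_pages(struct backing_dev_info *bdi)
	{
		return bdi_stat(bdi, BDI_RECLAIMABLE);
	}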

Somehow I'd feel much better about doing this calculation all the way
down in wb_writeback instead of in the callers, so we'd at least have
one documented place for these insanities.
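
Just to sketch what I mean (get_nr_dirty_pages is a made-up helper
name; the idea is that callers pass nr_pages == 0 and the one place
that actually knows computes the estimate):

	/* sketch only: centralize the estimate in one documented place.
	 * get_nr_dirty_pages() is a made-up helper name. */
	static long get_nr_dirty_pages(void)
	{
		return global_page_state(NR_FILE_DIRTY) +
		       global_page_state(NR_UNSTABLE_NFS);
	}

	/* in wb_writeback(): nr_pages == 0 means "compute it here" */
	if (!work->nr_pages)
		work->nr_pages = get_nr_dirty_pages();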